00:00:00.000 Started by upstream project "autotest-per-patch" build number 132776
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.065 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.088 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.124 Using shallow fetch with depth 1
00:00:00.124 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.124 > git --version # timeout=10
00:00:00.189 > git --version # 'git version 2.39.2'
00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.375 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.387 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.399 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.399 > git config core.sparsecheckout # timeout=10
00:00:04.410 > git read-tree -mu HEAD # timeout=10
00:00:04.427 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.449 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.450 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.531 [Pipeline] Start of Pipeline
00:00:04.545 [Pipeline] library
00:00:04.547 Loading library shm_lib@master
00:00:04.547 Library shm_lib@master is cached. Copying from home.
00:00:04.567 [Pipeline] node
00:01:30.621 Running on CYP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.636 [Pipeline] {
00:01:30.689 [Pipeline] catchError
00:01:30.692 [Pipeline] {
00:01:30.701 [Pipeline] wrap
00:01:30.709 [Pipeline] {
00:01:30.719 [Pipeline] stage
00:01:30.721 [Pipeline] { (Prologue)
00:01:30.902 [Pipeline] sh
00:01:31.772 + logger -p user.info -t JENKINS-CI
00:01:31.807 [Pipeline] echo
00:01:31.809 Node: CYP6
00:01:31.816 [Pipeline] sh
00:01:32.165 [Pipeline] setCustomBuildProperty
00:01:32.178 [Pipeline] echo
00:01:32.179 Cleanup processes
00:01:32.184 [Pipeline] sh
00:01:32.477 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.477 27206 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.494 [Pipeline] sh
00:01:32.795 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.795 ++ grep -v 'sudo pgrep'
00:01:32.795 ++ awk '{print $1}'
00:01:32.795 + sudo kill -9
00:01:32.795 + true
00:01:32.813 [Pipeline] cleanWs
00:01:32.825 [WS-CLEANUP] Deleting project workspace...
00:01:32.825 [WS-CLEANUP] Deferred wipeout is used...
00:01:32.840 [WS-CLEANUP] done
00:01:32.845 [Pipeline] setCustomBuildProperty
00:01:32.885 [Pipeline] sh
00:01:33.178 + sudo git config --global --replace-all safe.directory '*'
00:01:33.321 [Pipeline] httpRequest
00:01:35.251 [Pipeline] echo
00:01:35.252 Sorcerer 10.211.164.101 is alive
00:01:35.259 [Pipeline] retry
00:01:35.261 [Pipeline] {
00:01:35.274 [Pipeline] httpRequest
00:01:35.278 HttpMethod: GET
00:01:35.279 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:35.280 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:35.285 Response Code: HTTP/1.1 200 OK
00:01:35.285 Success: Status code 200 is in the accepted range: 200,404
00:01:35.285 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:35.432 [Pipeline] }
00:01:35.443 [Pipeline] // retry
00:01:35.447 [Pipeline] sh
00:01:35.734 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:35.751 [Pipeline] httpRequest
00:01:36.153 [Pipeline] echo
00:01:36.155 Sorcerer 10.211.164.101 is alive
00:01:36.164 [Pipeline] retry
00:01:36.167 [Pipeline] {
00:01:36.185 [Pipeline] httpRequest
00:01:36.190 HttpMethod: GET
00:01:36.190 URL: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz
00:01:36.191 Sending request to url: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz
00:01:36.196 Response Code: HTTP/1.1 200 OK
00:01:36.196 Success: Status code 200 is in the accepted range: 200,404
00:01:36.196 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz
00:01:38.468 [Pipeline] }
00:01:38.478 [Pipeline] // retry
00:01:38.483 [Pipeline] sh
00:01:38.776 + tar --no-same-owner -xf spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz
00:01:42.096 [Pipeline] sh
00:01:42.385 + git -C spdk log --oneline -n5
00:01:42.385 15ce1ba92 lib/reduce: Send unmap to backing dev
00:01:42.385 5f032e8b7 lib/reduce: Write Zero to partial chunk when unmapping the chunks.
00:01:42.385 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:42.385 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:42.385 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:42.396 [Pipeline] }
00:01:42.408 [Pipeline] // stage
00:01:42.415 [Pipeline] stage
00:01:42.417 [Pipeline] { (Prepare)
00:01:42.428 [Pipeline] writeFile
00:01:42.440 [Pipeline] sh
00:01:42.726 + logger -p user.info -t JENKINS-CI
00:01:42.740 [Pipeline] sh
00:01:43.026 + logger -p user.info -t JENKINS-CI
00:01:43.038 [Pipeline] sh
00:01:43.324 + cat autorun-spdk.conf
00:01:43.325 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.325 SPDK_TEST_NVMF=1
00:01:43.325 SPDK_TEST_NVME_CLI=1
00:01:43.325 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.325 SPDK_TEST_NVMF_NICS=e810
00:01:43.325 SPDK_TEST_VFIOUSER=1
00:01:43.325 SPDK_RUN_UBSAN=1
00:01:43.325 NET_TYPE=phy
00:01:43.333 RUN_NIGHTLY=0
00:01:43.338 [Pipeline] readFile
00:01:43.378 [Pipeline] withEnv
00:01:43.380 [Pipeline] {
00:01:43.391 [Pipeline] sh
00:01:43.684 + set -ex
00:01:43.684 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:43.684 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:43.684 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.684 ++ SPDK_TEST_NVMF=1
00:01:43.684 ++ SPDK_TEST_NVME_CLI=1
00:01:43.684 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.684 ++ SPDK_TEST_NVMF_NICS=e810
00:01:43.684 ++ SPDK_TEST_VFIOUSER=1
00:01:43.684 ++ SPDK_RUN_UBSAN=1
00:01:43.684 ++ NET_TYPE=phy
00:01:43.684 ++ RUN_NIGHTLY=0
00:01:43.684 + case $SPDK_TEST_NVMF_NICS in
00:01:43.684 + DRIVERS=ice
00:01:43.684 + [[ tcp == \r\d\m\a ]]
00:01:43.684 + [[ -n ice ]]
00:01:43.684 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:43.684 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:53.692 rmmod: ERROR: Module irdma is not currently loaded
00:01:53.692 rmmod: ERROR: Module i40iw is not currently loaded
00:01:53.692 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:53.692 + true
00:01:53.692 + for D in $DRIVERS
00:01:53.692 + sudo modprobe ice
00:01:53.692 + exit 0
00:01:53.704 [Pipeline] }
00:01:53.719 [Pipeline] // withEnv
00:01:53.725 [Pipeline] }
00:01:53.738 [Pipeline] // stage
00:01:53.748 [Pipeline] catchError
00:01:53.750 [Pipeline] {
00:01:53.766 [Pipeline] timeout
00:01:53.766 Timeout set to expire in 1 hr 0 min
00:01:53.768 [Pipeline] {
00:01:53.783 [Pipeline] stage
00:01:53.785 [Pipeline] { (Tests)
00:01:53.830 [Pipeline] sh
00:01:54.123 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.123 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.123 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.123 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:54.123 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:54.123 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:54.123 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:54.123 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:54.123 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:54.123 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:54.123 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:54.123 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.123 + source /etc/os-release
00:01:54.123 ++ NAME='Fedora Linux'
00:01:54.123 ++ VERSION='39 (Cloud Edition)'
00:01:54.123 ++ ID=fedora
00:01:54.123 ++ VERSION_ID=39
00:01:54.123 ++ VERSION_CODENAME=
00:01:54.123 ++ PLATFORM_ID=platform:f39
00:01:54.123 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.123 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.123 ++ LOGO=fedora-logo-icon
00:01:54.123 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.123 ++ HOME_URL=https://fedoraproject.org/
00:01:54.123 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.123 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.123 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.123 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.123 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.123 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.123 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.123 ++ SUPPORT_END=2024-11-12
00:01:54.123 ++ VARIANT='Cloud Edition'
00:01:54.123 ++ VARIANT_ID=cloud
00:01:54.123 + uname -a
00:01:54.123 Linux spdk-cyp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:54.123 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:57.426 Hugepages
00:01:57.426 node hugesize free / total
00:01:57.426 node0 1048576kB 0 / 0
00:01:57.426 node0 2048kB 0 / 0
00:01:57.426 node1 1048576kB 0 / 0
00:01:57.426 node1 2048kB 0 / 0
00:01:57.426
00:01:57.426 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:57.426 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:57.426 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:57.426 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:57.426 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:57.426 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:57.426 + rm -f /tmp/spdk-ld-path
00:01:57.426 + source autorun-spdk.conf
00:01:57.426 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.426 ++ SPDK_TEST_NVMF=1
00:01:57.426 ++ SPDK_TEST_NVME_CLI=1
00:01:57.426 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.426 ++ SPDK_TEST_NVMF_NICS=e810
00:01:57.426 ++ SPDK_TEST_VFIOUSER=1
00:01:57.426 ++ SPDK_RUN_UBSAN=1
00:01:57.426 ++ NET_TYPE=phy
00:01:57.426 ++ RUN_NIGHTLY=0
00:01:57.426 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:57.426 + [[ -n '' ]]
00:01:57.426 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:57.426 + for M in /var/spdk/build-*-manifest.txt
00:01:57.426 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:57.426 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.426 + for M in /var/spdk/build-*-manifest.txt
00:01:57.426 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:57.426 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.426 + for M in /var/spdk/build-*-manifest.txt
00:01:57.426 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:57.426 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.426 ++ uname
00:01:57.426 + [[ Linux == \L\i\n\u\x ]]
00:01:57.426 + sudo dmesg -T
00:01:57.426 + sudo dmesg --clear
00:01:57.426 + dmesg_pid=28194
00:01:57.426 + [[ Fedora Linux == FreeBSD ]]
00:01:57.426 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.426 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.426 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:57.426 + sudo dmesg -Tw
00:01:57.426 + [[ -x /usr/src/fio-static/fio ]]
00:01:57.426 + export FIO_BIN=/usr/src/fio-static/fio
00:01:57.426 + FIO_BIN=/usr/src/fio-static/fio
00:01:57.426 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:57.426 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:57.426 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:57.426 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:57.426 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:57.426 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:57.426 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:57.426 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:57.426 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.426 06:00:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:57.426 06:00:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:57.426 06:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:57.426 06:00:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:57.426 06:00:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.427 06:00:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:57.427 06:00:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:57.427 06:00:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:57.427 06:00:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:57.427 06:00:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:57.427 06:00:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:57.427 06:00:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.427 06:00:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.427 06:00:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.427 06:00:51 -- paths/export.sh@5 -- $ export PATH
00:01:57.427 06:00:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.427 06:00:51 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:57.427 06:00:51 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:57.427 06:00:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733720451.XXXXXX
00:01:57.427 06:00:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733720451.iru7Q8
00:01:57.427 06:00:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:57.427 06:00:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:57.427 06:00:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:57.427 06:00:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:57.427 06:00:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:57.427 06:00:51 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:57.427 06:00:51 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:57.427 06:00:51 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.427 06:00:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:57.427 06:00:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:57.427 06:00:51 -- pm/common@17 -- $ local monitor
00:01:57.427 06:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.427 06:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.427 06:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.427 06:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.427 06:00:51 -- pm/common@21 -- $ date +%s
00:01:57.427 06:00:51 -- pm/common@21 -- $ date +%s
00:01:57.427 06:00:51 -- pm/common@25 -- $ sleep 1
00:01:57.427 06:00:51 -- pm/common@21 -- $ date +%s
00:01:57.427 06:00:51 -- pm/common@21 -- $ date +%s
00:01:57.427 06:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733720451
00:01:57.427 06:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733720451
00:01:57.427 06:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733720451
00:01:57.427 06:00:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733720451
00:01:57.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733720451_collect-vmstat.pm.log
00:01:57.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733720451_collect-cpu-load.pm.log
00:01:57.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733720451_collect-cpu-temp.pm.log
00:01:57.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733720451_collect-bmc-pm.bmc.pm.log
00:01:58.658 06:00:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:58.658 06:00:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:58.658 06:00:52 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:58.658 06:00:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:58.658 06:00:52 -- spdk/autobuild.sh@16 -- $ date -u
00:01:58.658 Mon Dec 9 05:00:52 AM UTC 2024
00:01:58.658 06:00:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:58.658 v25.01-pre-305-g15ce1ba92
00:01:58.658 06:00:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:58.658 06:00:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:58.658 06:00:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:58.658 06:00:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:58.658 06:00:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:58.658 06:00:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.658 ************************************
00:01:58.658 START TEST ubsan
00:01:58.658 ************************************
00:01:58.658 06:00:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:58.658 using ubsan
00:01:58.658
00:01:58.658 real 0m0.001s
00:01:58.658 user 0m0.000s
00:01:58.658 sys 0m0.001s
00:01:58.658 06:00:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:58.658 06:00:53 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:58.658 ************************************
00:01:58.658 END TEST ubsan
00:01:58.658 ************************************
00:01:58.658 06:00:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:58.658 06:00:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:58.658 06:00:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:58.658 06:00:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:59.231 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:59.231 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:00.175 Using 'verbs' RDMA provider
00:02:16.033 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:30.938 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:30.938 Creating mk/config.mk...done.
00:02:30.938 Creating mk/cc.flags.mk...done.
00:02:30.938 Type 'make' to build.
00:02:30.938 06:01:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j128
00:02:30.938 06:01:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.938 06:01:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.938 06:01:24 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.938 ************************************
00:02:30.938 START TEST make
00:02:30.938 ************************************
00:02:30.938 06:01:24 make -- common/autotest_common.sh@1129 -- $ make -j128
00:02:30.938 make[1]: Nothing to be done for 'all'.
00:02:32.847 The Meson build system
00:02:32.847 Version: 1.5.0
00:02:32.847 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:32.847 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:32.847 Build type: native build
00:02:32.847 Project name: libvfio-user
00:02:32.847 Project version: 0.0.1
00:02:32.847 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:32.847 C linker for the host machine: cc ld.bfd 2.40-14
00:02:32.847 Host machine cpu family: x86_64
00:02:32.847 Host machine cpu: x86_64
00:02:32.847 Run-time dependency threads found: YES
00:02:32.847 Library dl found: YES
00:02:32.847 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:32.847 Run-time dependency json-c found: YES 0.17
00:02:32.847 Run-time dependency cmocka found: YES 1.1.7
00:02:32.847 Program pytest-3 found: NO
00:02:32.847 Program flake8 found: NO
00:02:32.847 Program misspell-fixer found: NO
00:02:32.847 Program restructuredtext-lint found: NO
00:02:32.847 Program valgrind found: YES (/usr/bin/valgrind)
00:02:32.847 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:32.847 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:32.847 Compiler for C supports arguments -Wwrite-strings: YES
00:02:32.847 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:32.847 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:32.848 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:32.848 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:32.848 Build targets in project: 8
00:02:32.848 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:32.848 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:32.848
00:02:32.848 libvfio-user 0.0.1
00:02:32.848
00:02:32.848 User defined options
00:02:32.848 buildtype : debug
00:02:32.848 default_library: shared
00:02:32.848 libdir : /usr/local/lib
00:02:32.848
00:02:32.848 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:33.107 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:33.107 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:33.107 [2/37] Compiling C object samples/null.p/null.c.o
00:02:33.107 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:33.107 [4/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:33.107 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:33.107 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:33.107 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:33.107 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:33.107 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:33.107 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:33.107 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:33.107 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:33.107 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:33.107 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:33.107 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:33.107 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:33.107 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:33.107 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:33.107 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:33.107 [20/37] Compiling C object samples/server.p/server.c.o
00:02:33.107 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:33.107 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:33.107 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:33.107 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:33.107 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:33.107 [26/37] Compiling C object samples/client.p/client.c.o
00:02:33.107 [27/37] Linking target samples/client
00:02:33.107 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:33.107 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:33.107 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:33.366 [31/37] Linking target test/unit_tests
00:02:33.366 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:33.366 [33/37] Linking target samples/server
00:02:33.366 [34/37] Linking target samples/null
00:02:33.366 [35/37] Linking target samples/lspci
00:02:33.366 [36/37] Linking target samples/gpio-pci-idio-16
00:02:33.366 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:33.366 INFO: autodetecting backend as ninja
00:02:33.366 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:33.626 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:33.884 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:33.884 ninja: no work to do.
00:02:39.169 The Meson build system
00:02:39.169 Version: 1.5.0
00:02:39.169 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:39.169 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:39.169 Build type: native build
00:02:39.169 Program cat found: YES (/usr/bin/cat)
00:02:39.169 Project name: DPDK
00:02:39.169 Project version: 24.03.0
00:02:39.169 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:39.169 C linker for the host machine: cc ld.bfd 2.40-14
00:02:39.169 Host machine cpu family: x86_64
00:02:39.169 Host machine cpu: x86_64
00:02:39.169 Message: ## Building in Developer Mode ##
00:02:39.169 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:39.169 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:39.169 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:39.169 Program python3 found: YES (/usr/bin/python3)
00:02:39.169 Program cat found: YES (/usr/bin/cat)
00:02:39.169 Compiler for C supports arguments -march=native: YES
00:02:39.169 Checking for size of "void *" : 8
00:02:39.169 Checking for size of "void *" : 8 (cached)
00:02:39.169 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:39.169 Library m found: YES
00:02:39.169 Library numa found: YES
00:02:39.169 Has header "numaif.h" : YES
00:02:39.169 Library fdt found: NO
00:02:39.169 Library execinfo found: NO
00:02:39.169 Has header "execinfo.h" : YES
00:02:39.169 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:39.169 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:39.169 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:39.169 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:39.169 Run-time dependency openssl found: YES 3.1.1
00:02:39.169 Run-time dependency libpcap found: YES 1.10.4
00:02:39.169 Has header "pcap.h" with dependency libpcap: YES
00:02:39.169 Compiler for C supports arguments -Wcast-qual: YES
00:02:39.169 Compiler for C supports arguments -Wdeprecated: YES
00:02:39.169 Compiler for C supports arguments -Wformat: YES
00:02:39.169 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:39.169 Compiler for C supports arguments -Wformat-security: NO
00:02:39.169 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:39.169 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:39.169 Compiler for C supports arguments -Wnested-externs: YES
00:02:39.169 Compiler for C supports arguments -Wold-style-definition: YES
00:02:39.169 Compiler for C supports arguments -Wpointer-arith: YES
00:02:39.169 Compiler for C supports arguments -Wsign-compare: YES
00:02:39.169 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:39.169 Compiler for C supports arguments -Wundef: YES
00:02:39.169 Compiler for C supports arguments -Wwrite-strings: YES
00:02:39.169 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:39.169 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:39.169 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:39.169 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:39.169 Program objdump found: YES (/usr/bin/objdump)
00:02:39.169 Compiler for C supports arguments -mavx512f: YES
00:02:39.169 Checking if "AVX512 checking" compiles: YES
00:02:39.169 Fetching value of define "__SSE4_2__" : 1
00:02:39.169 Fetching value of define "__AES__" : 1
00:02:39.169 Fetching value of define "__AVX__" : 1
00:02:39.169 Fetching value of define "__AVX2__" : 1
00:02:39.169 Fetching value of define "__AVX512BW__" : 1
00:02:39.169 Fetching value of define "__AVX512CD__" : 1
00:02:39.169 Fetching value of define "__AVX512DQ__" : 1
00:02:39.169 Fetching value of define "__AVX512F__" : 1
00:02:39.169 Fetching value of define "__AVX512VL__" : 1
00:02:39.169 Fetching value of define "__PCLMUL__" : 1
00:02:39.169 Fetching value of define "__RDRND__" : 1
00:02:39.169 Fetching value of define "__RDSEED__" : 1
00:02:39.169 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:39.169 Fetching value of define "__znver1__" : (undefined)
00:02:39.169 Fetching value of define "__znver2__" : (undefined)
00:02:39.169 Fetching value of define "__znver3__" : (undefined)
00:02:39.169 Fetching value of define "__znver4__" : (undefined)
00:02:39.169 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:39.169 Message: lib/log: Defining dependency "log"
00:02:39.169 Message: lib/kvargs: Defining dependency "kvargs"
00:02:39.169 Message: lib/telemetry: Defining dependency "telemetry"
00:02:39.169 Checking for function "getentropy" : NO
00:02:39.169 Message: lib/eal: Defining dependency "eal"
00:02:39.169 Message: lib/ring: Defining dependency "ring"
00:02:39.169 Message: lib/rcu: Defining dependency "rcu"
00:02:39.169 Message: lib/mempool: Defining dependency "mempool"
00:02:39.169 Message: lib/mbuf: Defining dependency "mbuf"
00:02:39.169 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:39.169 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:39.169 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:39.169 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:39.169 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:39.169 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:39.169 Compiler for C supports arguments -mpclmul: YES
00:02:39.169 Compiler for C supports arguments -maes: YES
00:02:39.169 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:39.169 Compiler for C supports arguments -mavx512bw: YES
00:02:39.169 Compiler for C supports arguments -mavx512dq: YES
00:02:39.169 Compiler for C supports arguments -mavx512vl: YES
00:02:39.169 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:39.169 Compiler for C supports arguments -mavx2: YES
00:02:39.169 Compiler for C supports arguments -mavx: YES
00:02:39.169 Message: lib/net: Defining dependency "net"
00:02:39.169 Message: lib/meter: Defining dependency "meter"
00:02:39.169 Message: lib/ethdev: Defining dependency "ethdev"
00:02:39.169 Message: lib/pci: Defining dependency "pci"
00:02:39.169 Message: lib/cmdline: Defining dependency "cmdline"
00:02:39.169 Message: lib/hash: Defining dependency "hash"
00:02:39.169 Message: lib/timer: Defining dependency "timer"
00:02:39.169 Message: lib/compressdev: Defining dependency "compressdev"
00:02:39.169 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:39.169 Message: lib/dmadev: Defining dependency "dmadev"
00:02:39.170 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:39.170 Message: lib/power: Defining dependency "power"
00:02:39.170 Message: lib/reorder: Defining dependency "reorder"
00:02:39.170 Message: lib/security: Defining dependency "security"
00:02:39.170 Has header "linux/userfaultfd.h" : YES
00:02:39.170 Has header "linux/vduse.h" : YES
00:02:39.170 Message: lib/vhost: Defining dependency "vhost"
00:02:39.170 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:39.170 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:39.170 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:39.170 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:39.170 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:39.170 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:39.170 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:39.170 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:39.170 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:39.170 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:39.170 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:39.170 Configuring doxy-api-html.conf using configuration
00:02:39.170 Configuring doxy-api-man.conf using configuration
00:02:39.170 Program mandb found: YES (/usr/bin/mandb)
00:02:39.170 Program sphinx-build found: NO
00:02:39.170 Configuring rte_build_config.h using configuration
00:02:39.170 Message:
00:02:39.170 =================
00:02:39.170 Applications Enabled
00:02:39.170 =================
00:02:39.170
00:02:39.170 apps:
00:02:39.170
00:02:39.170
00:02:39.170 Message:
00:02:39.170 =================
00:02:39.170 Libraries Enabled
00:02:39.170 =================
00:02:39.170
00:02:39.170 libs:
00:02:39.170 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:39.170 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:39.170 cryptodev, dmadev, power, reorder, security, vhost,
00:02:39.170
00:02:39.170 Message:
00:02:39.170 ===============
00:02:39.170 Drivers Enabled
00:02:39.170 ===============
00:02:39.170
00:02:39.170 common:
00:02:39.170
00:02:39.170 bus:
00:02:39.170 pci, vdev,
00:02:39.170 mempool:
00:02:39.170 ring,
00:02:39.170 dma:
00:02:39.170
00:02:39.170 net:
00:02:39.170
00:02:39.170 crypto:
00:02:39.170
00:02:39.170 compress:
00:02:39.170
00:02:39.170 vdpa:
00:02:39.170
00:02:39.170
00:02:39.170 Message:
00:02:39.170 =================
00:02:39.170 Content Skipped
00:02:39.170 =================
00:02:39.170
00:02:39.170 apps:
00:02:39.170 dumpcap: explicitly disabled via build config
00:02:39.170 graph: explicitly disabled via build config
00:02:39.170 pdump: explicitly disabled via build config
00:02:39.170 proc-info: explicitly disabled via build config
00:02:39.170 test-acl: explicitly disabled via build config
00:02:39.170 test-bbdev: explicitly disabled via build config
00:02:39.170 test-cmdline: explicitly disabled via build config
00:02:39.170 test-compress-perf: explicitly disabled via build config
00:02:39.170 test-crypto-perf: explicitly disabled via build config
00:02:39.170 test-dma-perf: explicitly disabled via build config
00:02:39.170 test-eventdev: explicitly disabled via build config
00:02:39.170 test-fib: explicitly disabled via build config
00:02:39.170 test-flow-perf: explicitly disabled via build config
00:02:39.170 test-gpudev: explicitly disabled via build config
00:02:39.170 test-mldev: explicitly disabled via build config
00:02:39.170 test-pipeline: explicitly disabled via build config
00:02:39.170 test-pmd: explicitly disabled via build config
00:02:39.170 test-regex: explicitly disabled via build config
00:02:39.170 test-sad: explicitly disabled via build config
00:02:39.170 test-security-perf: explicitly disabled via build config
00:02:39.170
00:02:39.170 libs:
00:02:39.170 argparse: explicitly disabled via build config
00:02:39.170 metrics: explicitly disabled via build config
00:02:39.170 acl: explicitly disabled via build config
00:02:39.170 bbdev: explicitly disabled via build config
00:02:39.170 bitratestats: explicitly disabled via build config
00:02:39.170 bpf: explicitly disabled via build config
00:02:39.170 cfgfile: explicitly disabled via build config
00:02:39.170 distributor: explicitly disabled via build config
00:02:39.170 efd: explicitly disabled via build config
00:02:39.170 eventdev: explicitly disabled via build config
00:02:39.170 dispatcher: explicitly disabled via build config
00:02:39.170 gpudev: explicitly disabled via build config
00:02:39.170 gro: explicitly disabled via build config
00:02:39.170 gso: explicitly disabled via build config
00:02:39.170 ip_frag: explicitly disabled via build config
00:02:39.170 jobstats: explicitly disabled via build config
00:02:39.170 latencystats: explicitly disabled via build config
00:02:39.170 lpm: explicitly disabled via build config
00:02:39.170 member: explicitly disabled via build config
00:02:39.170 pcapng: explicitly disabled via build config
00:02:39.170 rawdev: explicitly disabled via build config
00:02:39.170 regexdev: explicitly disabled via build config
00:02:39.170 mldev: explicitly disabled via build config
00:02:39.170 rib: explicitly disabled via build config
00:02:39.170 sched: explicitly disabled via build config
00:02:39.170 stack: explicitly disabled via build config
00:02:39.170 ipsec: explicitly disabled via build config
00:02:39.170 pdcp: explicitly disabled via build config
00:02:39.170 fib: explicitly disabled via build config
00:02:39.170 port: explicitly disabled via build config
00:02:39.170 pdump: explicitly disabled via build config
00:02:39.170 table: explicitly disabled via build config
00:02:39.170 pipeline: explicitly disabled via build config
00:02:39.170 graph: explicitly disabled via build config
00:02:39.170 node: explicitly disabled via build config
00:02:39.170
00:02:39.170 drivers:
00:02:39.170 common/cpt: not in enabled drivers build config
00:02:39.170 common/dpaax: not in enabled drivers build config
00:02:39.170 common/iavf: not in enabled drivers build config
00:02:39.170 common/idpf: not in enabled drivers build config
00:02:39.170 common/ionic: not in enabled drivers build config
00:02:39.170 common/mvep: not in enabled drivers build config
00:02:39.170 common/octeontx: not in enabled drivers build config
00:02:39.170 bus/auxiliary: not in enabled drivers build config
00:02:39.170 bus/cdx: not in enabled drivers build config
00:02:39.170 bus/dpaa: not in enabled drivers build config
00:02:39.170 bus/fslmc: not in enabled drivers build config
00:02:39.170 bus/ifpga: not in enabled drivers build config
00:02:39.170 bus/platform: not in enabled drivers build config
00:02:39.170 bus/uacce: not in enabled drivers build config
00:02:39.170 bus/vmbus: not in enabled drivers build config
00:02:39.170 common/cnxk: not in enabled drivers build config
00:02:39.170 common/mlx5: not in enabled drivers build config
00:02:39.170 common/nfp: not in enabled drivers build config
00:02:39.170 common/nitrox: not in enabled drivers build config
00:02:39.170 common/qat: not in enabled drivers build config
00:02:39.170 common/sfc_efx: not in enabled drivers build config
00:02:39.170 mempool/bucket: not in enabled drivers build config
00:02:39.170 mempool/cnxk: not in enabled drivers build config
00:02:39.170 mempool/dpaa: not in enabled drivers build config
00:02:39.170 mempool/dpaa2: not in enabled drivers build config
00:02:39.170 mempool/octeontx: not in enabled drivers build config
00:02:39.170 mempool/stack: not in enabled drivers build config
00:02:39.170 dma/cnxk: not in enabled drivers build config
00:02:39.170 dma/dpaa: not in enabled drivers build config
00:02:39.170 dma/dpaa2: not in enabled drivers build config
00:02:39.170 dma/hisilicon: not in enabled drivers build config
00:02:39.170 dma/idxd: not in enabled drivers build config
00:02:39.170 dma/ioat: not in enabled drivers build config
00:02:39.170 dma/skeleton: not in enabled drivers build config
00:02:39.170 net/af_packet: not in enabled drivers build config
00:02:39.170 net/af_xdp: not in enabled drivers build config
00:02:39.170 net/ark: not in enabled drivers build config
00:02:39.170 net/atlantic: not in enabled drivers build config
00:02:39.170 net/avp: not in enabled drivers build config
00:02:39.170 net/axgbe: not in enabled drivers build config
00:02:39.170 net/bnx2x: not in enabled drivers build config
00:02:39.170 net/bnxt: not in enabled drivers build config
00:02:39.170 net/bonding: not in enabled drivers build config
00:02:39.171 net/cnxk: not in enabled drivers build config
00:02:39.171 net/cpfl: not in enabled drivers build config
00:02:39.171 net/cxgbe: not in enabled drivers build config
00:02:39.171 net/dpaa: not in enabled drivers build config
00:02:39.171 net/dpaa2: not in enabled drivers build config
00:02:39.171 net/e1000: not in enabled drivers build config
00:02:39.171 net/ena: not in enabled drivers build config
00:02:39.171 net/enetc: not in enabled drivers build config
00:02:39.171 net/enetfec: not in enabled drivers build config
00:02:39.171 net/enic: not in enabled drivers build config
00:02:39.171 net/failsafe: not in enabled drivers build config
00:02:39.171 net/fm10k: not in enabled drivers build config
00:02:39.171 net/gve: not in enabled drivers build config
00:02:39.171 net/hinic: not in enabled drivers build config
00:02:39.171 net/hns3: not in enabled drivers build config
00:02:39.171 net/i40e: not in enabled drivers build config
00:02:39.171 net/iavf: not in enabled drivers build config
00:02:39.171 net/ice: not in enabled drivers build config
00:02:39.171 net/idpf: not in enabled drivers build config
00:02:39.171 net/igc: not in enabled drivers build config
00:02:39.171 net/ionic: not in enabled drivers build config
00:02:39.171 net/ipn3ke: not in enabled drivers build config
00:02:39.171 net/ixgbe: not in enabled drivers build config
00:02:39.171 net/mana: not in enabled drivers build config
00:02:39.171 net/memif: not in enabled drivers build config
00:02:39.171 net/mlx4: not in enabled drivers build config
00:02:39.171 net/mlx5: not in enabled drivers build config
00:02:39.171 net/mvneta: not in enabled drivers build config
00:02:39.171 net/mvpp2: not in enabled drivers build config
00:02:39.171 net/netvsc: not in enabled drivers build config
00:02:39.171 net/nfb: not in enabled drivers build config
00:02:39.171 net/nfp: not in enabled drivers build config
00:02:39.171 net/ngbe: not in enabled drivers build config
00:02:39.171 net/null: not in enabled drivers build config
00:02:39.171 net/octeontx: not in enabled drivers build config
00:02:39.171 net/octeon_ep: not in enabled drivers build config
00:02:39.171 net/pcap: not in enabled drivers build config
00:02:39.171 net/pfe: not in enabled drivers build config
00:02:39.171 net/qede: not in enabled drivers build config
00:02:39.171 net/ring: not in enabled drivers build config
00:02:39.171 net/sfc: not in enabled drivers build config
00:02:39.171 net/softnic: not in enabled drivers build config
00:02:39.171 net/tap: not in enabled drivers build config
00:02:39.171 net/thunderx: not in enabled drivers build config
00:02:39.171 net/txgbe: not in enabled drivers build config
00:02:39.171 net/vdev_netvsc: not in enabled drivers build config
00:02:39.171 net/vhost: not in enabled drivers build config
00:02:39.171 net/virtio: not in enabled drivers build config
00:02:39.171 net/vmxnet3: not in enabled drivers build config
00:02:39.171 raw/*: missing internal dependency, "rawdev"
00:02:39.171 crypto/armv8: not in enabled drivers build config
00:02:39.171 crypto/bcmfs: not in enabled drivers build config
00:02:39.171 crypto/caam_jr: not in enabled drivers build config
00:02:39.171 crypto/ccp: not in enabled drivers build config
00:02:39.171 crypto/cnxk: not in enabled drivers build config
00:02:39.171 crypto/dpaa_sec: not in enabled drivers build config
00:02:39.171 crypto/dpaa2_sec: not in enabled drivers build config
00:02:39.171 crypto/ipsec_mb: not in enabled drivers build config
00:02:39.171 crypto/mlx5: not in enabled drivers build config
00:02:39.171 crypto/mvsam: not in enabled drivers build config
00:02:39.171 crypto/nitrox: not in enabled drivers build config
00:02:39.171 crypto/null: not in enabled drivers build config
00:02:39.171 crypto/octeontx: not in enabled drivers build config
00:02:39.171 crypto/openssl: not in enabled drivers build config
00:02:39.171 crypto/scheduler: not in enabled drivers build config
00:02:39.171 crypto/uadk: not in enabled drivers build config
00:02:39.171 crypto/virtio: not in enabled drivers build config
00:02:39.171 compress/isal: not in enabled drivers build config
00:02:39.171 compress/mlx5: not in enabled drivers build config
00:02:39.171 compress/nitrox: not in enabled drivers build config
00:02:39.171 compress/octeontx: not in enabled drivers build config
00:02:39.171 compress/zlib: not in enabled drivers build config
00:02:39.171 regex/*: missing internal dependency, "regexdev"
00:02:39.171 ml/*: missing internal dependency, "mldev"
00:02:39.171 vdpa/ifc: not in enabled drivers build config
00:02:39.171 vdpa/mlx5: not in enabled drivers build config
00:02:39.171 vdpa/nfp: not in enabled drivers build config
00:02:39.171 vdpa/sfc: not in enabled drivers build config
00:02:39.171 event/*: missing internal dependency, "eventdev"
00:02:39.171 baseband/*: missing internal dependency, "bbdev"
00:02:39.171 gpu/*: missing internal dependency, "gpudev"
00:02:39.171
00:02:39.171
00:02:39.171 Build targets in project: 84
00:02:39.171
00:02:39.171 DPDK 24.03.0
00:02:39.171
00:02:39.171 User defined options
00:02:39.171 buildtype : debug
00:02:39.171 default_library : shared
00:02:39.171 libdir : lib
00:02:39.171 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:39.171 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:39.171 c_link_args :
00:02:39.171 cpu_instruction_set: native
00:02:39.171 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:39.171 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:39.171 enable_docs : false
00:02:39.171 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:39.171 enable_kmods : false
00:02:39.171 max_lcores : 128
00:02:39.171 tests : false
00:02:39.171
00:02:39.171 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.443 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:39.443 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:39.443 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:39.443 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:39.443 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:39.443 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:39.443 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:39.443 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:39.443 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:39.443 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:39.443 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:39.443 [11/267] Linking static target lib/librte_kvargs.a
00:02:39.443 [12/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:39.443 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:39.443 [14/267] Linking static target lib/librte_log.a
00:02:39.703 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:39.703 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:39.703 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:39.703 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:39.703 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:39.703 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:39.703 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:39.703 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:39.703 [23/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:39.703 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:39.703 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:39.703 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:39.703 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:39.703 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:39.703 [29/267] Linking static target lib/librte_pci.a
00:02:39.703 [30/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:39.703 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:39.703 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:39.703 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:39.703 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:39.703 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:39.966 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:39.966 [37/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:39.966 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:39.966 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:39.966 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:39.966 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:39.966 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:39.966 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:39.966 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:39.966 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:39.966 [46/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:39.966 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:39.966 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:39.966 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:39.966 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:39.966 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:39.966 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:39.966 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:39.966 [54/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:39.966 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:39.966 [56/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:39.966 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:39.966 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:39.966 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:39.966 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:39.966 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:39.966 [62/267] Linking static target lib/librte_timer.a
00:02:39.966 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:39.966 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:39.966 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:39.966 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:39.966 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:39.966 [68/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:39.966 [69/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:39.966 [70/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:39.966 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.966 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.966 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.966 [74/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.966 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.966 [76/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.966 [77/267] Linking static target lib/librte_telemetry.a 00:02:39.966 [78/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.966 [79/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.966 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.966 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.966 [82/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:39.966 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.966 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.966 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.966 [86/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.966 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.966 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.966 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.966 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.966 [91/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.966 [92/267] Linking static target lib/librte_meter.a 00:02:39.966 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.966 [94/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.966 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.966 [96/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.966 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.966 [98/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.966 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.966 [100/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.966 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.966 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.966 [103/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.966 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.966 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.966 [106/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.966 [107/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:39.966 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.966 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.966 [110/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.226 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.226 [112/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.226 [113/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.226 [114/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.226 [115/267] Linking static target lib/librte_ring.a 00:02:40.226 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.226 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.226 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.226 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.226 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.226 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.226 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.226 [123/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.226 [124/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.226 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.226 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.226 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.226 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.226 [129/267] Linking static target lib/librte_cmdline.a 00:02:40.226 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.226 [131/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.226 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.226 [133/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.226 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.226 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.226 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:40.226 [137/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:40.226 [138/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.226 [139/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.226 [140/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.227 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:40.227 [142/267] Linking static target lib/librte_mempool.a 00:02:40.227 [143/267] Linking static target lib/librte_net.a 00:02:40.227 [144/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.227 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.227 [146/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.227 [147/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.227 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.227 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.227 [150/267] Linking static target lib/librte_reorder.a 00:02:40.227 [151/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.227 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.227 [153/267] Linking static target lib/librte_dmadev.a 00:02:40.227 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.227 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.227 [156/267] Linking static target lib/librte_compressdev.a 00:02:40.227 [157/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.227 [158/267] Linking static target lib/librte_rcu.a 00:02:40.227 [159/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.227 [160/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.227 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:40.227 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.227 [163/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.227 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.227 [165/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.227 [166/267] Linking static target lib/librte_power.a 00:02:40.227 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.227 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.227 [169/267] Linking static target lib/librte_eal.a 00:02:40.227 [170/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:40.227 [171/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.227 [172/267] Linking static target lib/librte_security.a 00:02:40.227 [173/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.227 [174/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.227 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.227 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.227 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.488 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:40.488 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.488 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:40.488 [181/267] Linking static target lib/librte_mbuf.a 00:02:40.488 [182/267] Linking target lib/librte_log.so.24.1 00:02:40.488 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.488 [184/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.489 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.489 [186/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.489 [187/267] Linking static target lib/librte_hash.a 00:02:40.489 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.489 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.489 [190/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:40.489 [191/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.489 [192/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:40.489 
[193/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.489 [194/267] Linking target lib/librte_kvargs.so.24.1 00:02:40.489 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:40.489 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.489 [197/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.489 [198/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.489 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.489 [200/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.489 [201/267] Linking static target drivers/librte_bus_vdev.a 00:02:40.489 [202/267] Linking static target drivers/librte_mempool_ring.a 00:02:40.489 [203/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.489 [204/267] Linking static target lib/librte_cryptodev.a 00:02:40.489 [205/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.489 [206/267] Linking static target drivers/librte_bus_pci.a 00:02:40.748 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.749 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.749 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.749 [210/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:40.749 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.749 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:40.749 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:41.009 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.009 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.009 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.009 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.009 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.009 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.009 [220/267] Linking static target lib/librte_ethdev.a 00:02:41.271 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.271 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.271 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.271 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.532 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.532 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.104 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.104 [228/267] Linking static target lib/librte_vhost.a 00:02:42.674 
[229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.060 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.663 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.606 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.606 [233/267] Linking target lib/librte_eal.so.24.1 00:02:51.866 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:51.866 [235/267] Linking target lib/librte_meter.so.24.1 00:02:51.866 [236/267] Linking target lib/librte_pci.so.24.1 00:02:51.866 [237/267] Linking target lib/librte_ring.so.24.1 00:02:51.866 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:51.866 [239/267] Linking target lib/librte_timer.so.24.1 00:02:51.866 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.866 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.866 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.866 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.866 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.866 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.866 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.866 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:51.866 [248/267] Linking target lib/librte_rcu.so.24.1 00:02:52.125 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.125 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.125 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:52.125 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:52.125 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:52.385 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:52.385 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:52.385 [256/267] Linking target lib/librte_net.so.24.1 00:02:52.385 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:52.385 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:52.385 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:52.385 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:52.385 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:52.385 [262/267] Linking target lib/librte_hash.so.24.1 00:02:52.385 [263/267] Linking target lib/librte_security.so.24.1 00:02:52.645 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:52.645 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.645 [266/267] Linking target lib/librte_power.so.24.1 00:02:52.645 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:52.645 INFO: autodetecting backend as ninja 00:02:52.645 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:55.940 CC lib/ut_mock/mock.o 00:02:55.940 CC lib/log/log.o 00:02:55.940 CC lib/log/log_flags.o 00:02:55.940 CC lib/log/log_deprecated.o 00:02:55.940 CC lib/ut/ut.o 
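The two INFO lines above record meson autodetecting ninja and the exact backend command for the DPDK subproject, so that step can be reproduced by hand verbatim:

# Re-running the backend command recorded in the log:
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128

The "CC lib/..." lines that follow switch from the DPDK subbuild to SPDK's own make-driven build, whose quiet output prefixes each action (CC, LIB, SO, SYMLINK) instead of printing full command lines.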
00:02:56.201 LIB libspdk_ut_mock.a 00:02:56.201 LIB libspdk_ut.a 00:02:56.201 LIB libspdk_log.a 00:02:56.201 SO libspdk_ut_mock.so.6.0 00:02:56.201 SO libspdk_ut.so.2.0 00:02:56.201 SO libspdk_log.so.7.1 00:02:56.201 SYMLINK libspdk_ut_mock.so 00:02:56.201 SYMLINK libspdk_log.so 00:02:56.201 SYMLINK libspdk_ut.so 00:02:56.463 CC lib/util/base64.o 00:02:56.463 CC lib/util/cpuset.o 00:02:56.463 CC lib/util/bit_array.o 00:02:56.463 CC lib/util/crc16.o 00:02:56.463 CC lib/util/crc32.o 00:02:56.463 CC lib/util/crc32c.o 00:02:56.463 CC lib/util/crc32_ieee.o 00:02:56.463 CC lib/util/dif.o 00:02:56.463 CC lib/util/crc64.o 00:02:56.463 CC lib/util/fd.o 00:02:56.463 CC lib/util/fd_group.o 00:02:56.463 CC lib/util/hexlify.o 00:02:56.463 CC lib/util/file.o 00:02:56.463 CC lib/util/iov.o 00:02:56.725 CC lib/util/math.o 00:02:56.725 CC lib/util/net.o 00:02:56.725 CC lib/dma/dma.o 00:02:56.725 CC lib/util/pipe.o 00:02:56.725 CC lib/ioat/ioat.o 00:02:56.725 CXX lib/trace_parser/trace.o 00:02:56.725 CC lib/util/strerror_tls.o 00:02:56.725 CC lib/util/string.o 00:02:56.725 CC lib/util/uuid.o 00:02:56.725 CC lib/util/xor.o 00:02:56.725 CC lib/util/zipf.o 00:02:56.725 CC lib/util/md5.o 00:02:56.725 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.725 CC lib/vfio_user/host/vfio_user.o 00:02:56.725 LIB libspdk_dma.a 00:02:56.986 SO libspdk_dma.so.5.0 00:02:56.986 LIB libspdk_ioat.a 00:02:56.986 SYMLINK libspdk_dma.so 00:02:56.986 SO libspdk_ioat.so.7.0 00:02:56.986 LIB libspdk_vfio_user.a 00:02:56.986 SYMLINK libspdk_ioat.so 00:02:56.986 SO libspdk_vfio_user.so.5.0 00:02:57.247 LIB libspdk_util.a 00:02:57.247 SYMLINK libspdk_vfio_user.so 00:02:57.247 SO libspdk_util.so.10.1 00:02:57.247 SYMLINK libspdk_util.so 00:02:57.817 CC lib/idxd/idxd.o 00:02:57.817 CC lib/idxd/idxd_kernel.o 00:02:57.817 CC lib/idxd/idxd_user.o 00:02:57.817 CC lib/vmd/vmd.o 00:02:57.817 CC lib/vmd/led.o 00:02:57.817 CC lib/json/json_parse.o 00:02:57.817 CC lib/json/json_util.o 00:02:57.817 CC lib/json/json_write.o 00:02:57.817 CC lib/env_dpdk/env.o 00:02:57.817 CC lib/conf/conf.o 00:02:57.817 CC lib/env_dpdk/memory.o 00:02:57.817 CC lib/env_dpdk/pci.o 00:02:57.817 CC lib/env_dpdk/init.o 00:02:57.817 CC lib/env_dpdk/threads.o 00:02:57.817 CC lib/env_dpdk/pci_ioat.o 00:02:57.817 CC lib/rdma_utils/rdma_utils.o 00:02:57.817 CC lib/env_dpdk/pci_virtio.o 00:02:57.817 CC lib/env_dpdk/pci_vmd.o 00:02:57.817 CC lib/env_dpdk/pci_idxd.o 00:02:57.817 CC lib/env_dpdk/pci_event.o 00:02:57.817 CC lib/env_dpdk/pci_dpdk.o 00:02:57.817 CC lib/env_dpdk/sigbus_handler.o 00:02:57.817 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.817 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.817 LIB libspdk_rdma_utils.a 00:02:57.817 LIB libspdk_conf.a 00:02:57.817 SO libspdk_rdma_utils.so.1.0 00:02:58.078 LIB libspdk_json.a 00:02:58.078 SO libspdk_conf.so.6.0 00:02:58.078 SO libspdk_json.so.6.0 00:02:58.078 SYMLINK libspdk_rdma_utils.so 00:02:58.078 SYMLINK libspdk_conf.so 00:02:58.078 SYMLINK libspdk_json.so 00:02:58.078 LIB libspdk_idxd.a 00:02:58.078 SO libspdk_idxd.so.12.1 00:02:58.078 LIB libspdk_vmd.a 00:02:58.339 LIB libspdk_trace_parser.a 00:02:58.339 SO libspdk_vmd.so.6.0 00:02:58.339 SYMLINK libspdk_idxd.so 00:02:58.339 SO libspdk_trace_parser.so.6.0 00:02:58.339 CC lib/rdma_provider/common.o 00:02:58.339 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.339 SYMLINK libspdk_vmd.so 00:02:58.339 CC lib/jsonrpc/jsonrpc_server.o 00:02:58.339 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:58.339 CC lib/jsonrpc/jsonrpc_client.o 00:02:58.339 CC lib/jsonrpc/jsonrpc_client_tcp.o 
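Each LIB/SO/SYMLINK triple above is SPDK's quiet-make shorthand for archiving a static library, linking the versioned shared object, and pointing the unversioned name at it. A rough expansion for the log library, using the object names compiled above (illustrative only; the real recipes live in SPDK's mk/ fragments and are not printed in quiet mode):

# Assumed expansion of the "LIB / SO / SYMLINK libspdk_log" steps:
ar crs libspdk_log.a log.o log_flags.o log_deprecated.o
cc -shared -Wl,-soname,libspdk_log.so.7.1 -o libspdk_log.so.7.1 \
   log.o log_flags.o log_deprecated.o
ln -sf libspdk_log.so.7.1 libspdk_log.so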
00:02:58.339 SYMLINK libspdk_trace_parser.so 00:02:58.616 LIB libspdk_rdma_provider.a 00:02:58.616 SO libspdk_rdma_provider.so.7.0 00:02:58.616 LIB libspdk_jsonrpc.a 00:02:58.616 SYMLINK libspdk_rdma_provider.so 00:02:58.616 SO libspdk_jsonrpc.so.6.0 00:02:58.616 SYMLINK libspdk_jsonrpc.so 00:02:58.877 LIB libspdk_env_dpdk.a 00:02:58.877 SO libspdk_env_dpdk.so.15.1 00:02:59.138 SYMLINK libspdk_env_dpdk.so 00:02:59.138 CC lib/rpc/rpc.o 00:02:59.138 LIB libspdk_rpc.a 00:02:59.399 SO libspdk_rpc.so.6.0 00:02:59.399 SYMLINK libspdk_rpc.so 00:02:59.660 CC lib/notify/notify.o 00:02:59.660 CC lib/notify/notify_rpc.o 00:02:59.660 CC lib/keyring/keyring.o 00:02:59.660 CC lib/keyring/keyring_rpc.o 00:02:59.660 CC lib/trace/trace.o 00:02:59.660 CC lib/trace/trace_flags.o 00:02:59.660 CC lib/trace/trace_rpc.o 00:02:59.920 LIB libspdk_notify.a 00:02:59.920 SO libspdk_notify.so.6.0 00:02:59.920 LIB libspdk_keyring.a 00:02:59.920 SO libspdk_keyring.so.2.0 00:02:59.920 SYMLINK libspdk_notify.so 00:02:59.920 LIB libspdk_trace.a 00:02:59.920 SYMLINK libspdk_keyring.so 00:02:59.920 SO libspdk_trace.so.11.0 00:03:00.181 SYMLINK libspdk_trace.so 00:03:00.441 CC lib/thread/thread.o 00:03:00.441 CC lib/sock/sock.o 00:03:00.441 CC lib/thread/iobuf.o 00:03:00.441 CC lib/sock/sock_rpc.o 00:03:00.702 LIB libspdk_sock.a 00:03:00.702 SO libspdk_sock.so.10.0 00:03:00.962 SYMLINK libspdk_sock.so 00:03:01.222 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.223 CC lib/nvme/nvme_ctrlr.o 00:03:01.223 CC lib/nvme/nvme_ns.o 00:03:01.223 CC lib/nvme/nvme_fabric.o 00:03:01.223 CC lib/nvme/nvme_ns_cmd.o 00:03:01.223 CC lib/nvme/nvme_pcie_common.o 00:03:01.223 CC lib/nvme/nvme_pcie.o 00:03:01.223 CC lib/nvme/nvme.o 00:03:01.223 CC lib/nvme/nvme_qpair.o 00:03:01.223 CC lib/nvme/nvme_quirks.o 00:03:01.223 CC lib/nvme/nvme_transport.o 00:03:01.223 CC lib/nvme/nvme_discovery.o 00:03:01.223 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.223 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.223 CC lib/nvme/nvme_tcp.o 00:03:01.223 CC lib/nvme/nvme_opal.o 00:03:01.223 CC lib/nvme/nvme_io_msg.o 00:03:01.223 CC lib/nvme/nvme_poll_group.o 00:03:01.223 CC lib/nvme/nvme_zns.o 00:03:01.223 CC lib/nvme/nvme_stubs.o 00:03:01.223 CC lib/nvme/nvme_auth.o 00:03:01.223 CC lib/nvme/nvme_vfio_user.o 00:03:01.223 CC lib/nvme/nvme_cuse.o 00:03:01.223 CC lib/nvme/nvme_rdma.o 00:03:01.792 LIB libspdk_thread.a 00:03:01.792 SO libspdk_thread.so.11.0 00:03:01.792 SYMLINK libspdk_thread.so 00:03:02.053 CC lib/fsdev/fsdev_rpc.o 00:03:02.053 CC lib/fsdev/fsdev.o 00:03:02.053 CC lib/fsdev/fsdev_io.o 00:03:02.053 CC lib/accel/accel.o 00:03:02.053 CC lib/accel/accel_rpc.o 00:03:02.053 CC lib/accel/accel_sw.o 00:03:02.053 CC lib/vfu_tgt/tgt_endpoint.o 00:03:02.053 CC lib/vfu_tgt/tgt_rpc.o 00:03:02.053 CC lib/init/subsystem.o 00:03:02.053 CC lib/init/json_config.o 00:03:02.053 CC lib/virtio/virtio_vhost_user.o 00:03:02.053 CC lib/init/subsystem_rpc.o 00:03:02.053 CC lib/init/rpc.o 00:03:02.053 CC lib/virtio/virtio.o 00:03:02.053 CC lib/virtio/virtio_pci.o 00:03:02.053 CC lib/virtio/virtio_vfio_user.o 00:03:02.053 CC lib/blob/blobstore.o 00:03:02.053 CC lib/blob/request.o 00:03:02.053 CC lib/blob/zeroes.o 00:03:02.053 CC lib/blob/blob_bs_dev.o 00:03:02.313 LIB libspdk_init.a 00:03:02.313 SO libspdk_init.so.6.0 00:03:02.313 LIB libspdk_vfu_tgt.a 00:03:02.573 SO libspdk_vfu_tgt.so.3.0 00:03:02.573 LIB libspdk_virtio.a 00:03:02.573 SYMLINK libspdk_init.so 00:03:02.573 SO libspdk_virtio.so.7.0 00:03:02.573 SYMLINK libspdk_vfu_tgt.so 00:03:02.573 SYMLINK libspdk_virtio.so 00:03:02.573 LIB 
libspdk_fsdev.a 00:03:02.832 SO libspdk_fsdev.so.2.0 00:03:02.832 SYMLINK libspdk_fsdev.so 00:03:02.832 CC lib/event/app.o 00:03:02.832 CC lib/event/reactor.o 00:03:02.832 CC lib/event/app_rpc.o 00:03:02.832 CC lib/event/log_rpc.o 00:03:02.832 CC lib/event/scheduler_static.o 00:03:03.092 LIB libspdk_accel.a 00:03:03.092 LIB libspdk_nvme.a 00:03:03.092 SO libspdk_accel.so.16.0 00:03:03.092 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:03.092 SYMLINK libspdk_accel.so 00:03:03.092 SO libspdk_nvme.so.15.0 00:03:03.092 LIB libspdk_event.a 00:03:03.351 SO libspdk_event.so.14.0 00:03:03.351 SYMLINK libspdk_event.so 00:03:03.351 SYMLINK libspdk_nvme.so 00:03:03.610 CC lib/bdev/bdev.o 00:03:03.610 CC lib/bdev/bdev_rpc.o 00:03:03.610 CC lib/bdev/bdev_zone.o 00:03:03.610 CC lib/bdev/part.o 00:03:03.610 CC lib/bdev/scsi_nvme.o 00:03:03.610 LIB libspdk_fuse_dispatcher.a 00:03:03.610 SO libspdk_fuse_dispatcher.so.1.0 00:03:03.869 SYMLINK libspdk_fuse_dispatcher.so 00:03:04.811 LIB libspdk_blob.a 00:03:04.811 SO libspdk_blob.so.12.0 00:03:04.811 SYMLINK libspdk_blob.so 00:03:05.073 CC lib/lvol/lvol.o 00:03:05.073 CC lib/blobfs/blobfs.o 00:03:05.073 CC lib/blobfs/tree.o 00:03:05.645 LIB libspdk_bdev.a 00:03:05.645 SO libspdk_bdev.so.17.0 00:03:05.907 LIB libspdk_blobfs.a 00:03:05.907 SYMLINK libspdk_bdev.so 00:03:05.907 SO libspdk_blobfs.so.11.0 00:03:05.907 LIB libspdk_lvol.a 00:03:05.907 SO libspdk_lvol.so.11.0 00:03:05.907 SYMLINK libspdk_blobfs.so 00:03:06.169 SYMLINK libspdk_lvol.so 00:03:06.169 CC lib/scsi/lun.o 00:03:06.169 CC lib/scsi/port.o 00:03:06.169 CC lib/scsi/dev.o 00:03:06.169 CC lib/nvmf/ctrlr.o 00:03:06.169 CC lib/scsi/scsi.o 00:03:06.169 CC lib/nvmf/ctrlr_discovery.o 00:03:06.169 CC lib/scsi/scsi_bdev.o 00:03:06.169 CC lib/nvmf/ctrlr_bdev.o 00:03:06.169 CC lib/scsi/scsi_pr.o 00:03:06.169 CC lib/scsi/scsi_rpc.o 00:03:06.169 CC lib/nvmf/subsystem.o 00:03:06.169 CC lib/scsi/task.o 00:03:06.169 CC lib/nvmf/nvmf.o 00:03:06.169 CC lib/nvmf/nvmf_rpc.o 00:03:06.169 CC lib/nvmf/transport.o 00:03:06.169 CC lib/nvmf/tcp.o 00:03:06.169 CC lib/nvmf/stubs.o 00:03:06.169 CC lib/nvmf/mdns_server.o 00:03:06.169 CC lib/nvmf/vfio_user.o 00:03:06.169 CC lib/nvmf/auth.o 00:03:06.169 CC lib/nvmf/rdma.o 00:03:06.169 CC lib/nbd/nbd.o 00:03:06.169 CC lib/ublk/ublk.o 00:03:06.169 CC lib/ublk/ublk_rpc.o 00:03:06.169 CC lib/nbd/nbd_rpc.o 00:03:06.169 CC lib/ftl/ftl_core.o 00:03:06.169 CC lib/ftl/ftl_init.o 00:03:06.169 CC lib/ftl/ftl_layout.o 00:03:06.169 CC lib/ftl/ftl_debug.o 00:03:06.169 CC lib/ftl/ftl_io.o 00:03:06.169 CC lib/ftl/ftl_sb.o 00:03:06.169 CC lib/ftl/ftl_l2p_flat.o 00:03:06.169 CC lib/ftl/ftl_l2p.o 00:03:06.169 CC lib/ftl/ftl_nv_cache.o 00:03:06.169 CC lib/ftl/ftl_band.o 00:03:06.169 CC lib/ftl/ftl_writer.o 00:03:06.169 CC lib/ftl/ftl_band_ops.o 00:03:06.169 CC lib/ftl/ftl_rq.o 00:03:06.169 CC lib/ftl/ftl_reloc.o 00:03:06.169 CC lib/ftl/ftl_l2p_cache.o 00:03:06.169 CC lib/ftl/ftl_p2l.o 00:03:06.169 CC lib/ftl/ftl_p2l_log.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.169 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.169 CC lib/ftl/utils/ftl_conf.o 00:03:06.169 CC lib/ftl/utils/ftl_md.o 00:03:06.169 CC lib/ftl/utils/ftl_mempool.o 00:03:06.169 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.169 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.169 CC lib/ftl/utils/ftl_property.o 00:03:06.169 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.169 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.169 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.169 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.429 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.429 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.429 CC lib/ftl/base/ftl_base_dev.o 00:03:06.429 CC lib/ftl/ftl_trace.o 00:03:06.429 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.429 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.689 LIB libspdk_scsi.a 00:03:06.689 SO libspdk_scsi.so.9.0 00:03:06.951 LIB libspdk_nbd.a 00:03:06.951 SYMLINK libspdk_scsi.so 00:03:06.951 SO libspdk_nbd.so.7.0 00:03:06.951 SYMLINK libspdk_nbd.so 00:03:06.951 LIB libspdk_ublk.a 00:03:06.951 SO libspdk_ublk.so.3.0 00:03:06.951 SYMLINK libspdk_ublk.so 00:03:07.212 CC lib/vhost/vhost.o 00:03:07.212 CC lib/vhost/vhost_rpc.o 00:03:07.212 CC lib/vhost/rte_vhost_user.o 00:03:07.212 CC lib/vhost/vhost_scsi.o 00:03:07.212 CC lib/vhost/vhost_blk.o 00:03:07.212 LIB libspdk_ftl.a 00:03:07.212 CC lib/iscsi/conn.o 00:03:07.212 CC lib/iscsi/init_grp.o 00:03:07.212 CC lib/iscsi/iscsi.o 00:03:07.212 CC lib/iscsi/param.o 00:03:07.212 CC lib/iscsi/portal_grp.o 00:03:07.212 CC lib/iscsi/tgt_node.o 00:03:07.212 CC lib/iscsi/iscsi_subsystem.o 00:03:07.212 CC lib/iscsi/iscsi_rpc.o 00:03:07.212 CC lib/iscsi/task.o 00:03:07.212 SO libspdk_ftl.so.9.0 00:03:07.474 SYMLINK libspdk_ftl.so 00:03:08.045 LIB libspdk_nvmf.a 00:03:08.045 LIB libspdk_vhost.a 00:03:08.045 SO libspdk_vhost.so.8.0 00:03:08.045 SO libspdk_nvmf.so.20.0 00:03:08.306 SYMLINK libspdk_vhost.so 00:03:08.306 LIB libspdk_iscsi.a 00:03:08.306 SYMLINK libspdk_nvmf.so 00:03:08.306 SO libspdk_iscsi.so.8.0 00:03:08.567 SYMLINK libspdk_iscsi.so 00:03:09.140 CC module/env_dpdk/env_dpdk_rpc.o 00:03:09.140 CC module/vfu_device/vfu_virtio.o 00:03:09.140 CC module/vfu_device/vfu_virtio_blk.o 00:03:09.140 CC module/vfu_device/vfu_virtio_scsi.o 00:03:09.140 CC module/vfu_device/vfu_virtio_rpc.o 00:03:09.140 CC module/vfu_device/vfu_virtio_fs.o 00:03:09.140 CC module/accel/dsa/accel_dsa.o 00:03:09.140 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.140 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:09.140 LIB libspdk_env_dpdk_rpc.a 00:03:09.140 CC module/keyring/linux/keyring.o 00:03:09.140 CC module/accel/ioat/accel_ioat.o 00:03:09.140 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:09.140 CC module/accel/error/accel_error.o 00:03:09.140 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.140 CC module/keyring/linux/keyring_rpc.o 00:03:09.140 CC module/blob/bdev/blob_bdev.o 00:03:09.140 CC module/accel/error/accel_error_rpc.o 00:03:09.140 CC module/fsdev/aio/fsdev_aio.o 00:03:09.140 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:09.140 CC module/fsdev/aio/linux_aio_mgr.o 00:03:09.140 CC module/sock/posix/posix.o 00:03:09.140 CC module/keyring/file/keyring.o 00:03:09.140 CC module/keyring/file/keyring_rpc.o 00:03:09.140 CC module/accel/iaa/accel_iaa.o 00:03:09.140 CC module/accel/iaa/accel_iaa_rpc.o 00:03:09.140 
CC module/scheduler/gscheduler/gscheduler.o 00:03:09.140 SO libspdk_env_dpdk_rpc.so.6.0 00:03:09.401 SYMLINK libspdk_env_dpdk_rpc.so 00:03:09.401 LIB libspdk_scheduler_dpdk_governor.a 00:03:09.401 LIB libspdk_keyring_linux.a 00:03:09.401 LIB libspdk_keyring_file.a 00:03:09.401 LIB libspdk_scheduler_gscheduler.a 00:03:09.401 LIB libspdk_accel_error.a 00:03:09.401 LIB libspdk_accel_ioat.a 00:03:09.401 SO libspdk_keyring_file.so.2.0 00:03:09.401 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:09.401 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.401 SO libspdk_keyring_linux.so.1.0 00:03:09.401 LIB libspdk_scheduler_dynamic.a 00:03:09.401 LIB libspdk_accel_iaa.a 00:03:09.401 SO libspdk_accel_error.so.2.0 00:03:09.401 SO libspdk_accel_ioat.so.6.0 00:03:09.401 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.401 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:09.401 LIB libspdk_accel_dsa.a 00:03:09.401 SO libspdk_accel_iaa.so.3.0 00:03:09.401 SYMLINK libspdk_keyring_file.so 00:03:09.401 SYMLINK libspdk_keyring_linux.so 00:03:09.401 SYMLINK libspdk_scheduler_gscheduler.so 00:03:09.401 LIB libspdk_blob_bdev.a 00:03:09.662 SO libspdk_accel_dsa.so.5.0 00:03:09.662 SYMLINK libspdk_accel_error.so 00:03:09.662 SYMLINK libspdk_scheduler_dynamic.so 00:03:09.662 SYMLINK libspdk_accel_ioat.so 00:03:09.662 SO libspdk_blob_bdev.so.12.0 00:03:09.662 SYMLINK libspdk_accel_iaa.so 00:03:09.662 SYMLINK libspdk_accel_dsa.so 00:03:09.662 LIB libspdk_vfu_device.a 00:03:09.662 SYMLINK libspdk_blob_bdev.so 00:03:09.662 SO libspdk_vfu_device.so.3.0 00:03:09.662 SYMLINK libspdk_vfu_device.so 00:03:09.662 LIB libspdk_fsdev_aio.a 00:03:09.923 SO libspdk_fsdev_aio.so.1.0 00:03:09.923 LIB libspdk_sock_posix.a 00:03:09.923 SO libspdk_sock_posix.so.6.0 00:03:09.923 SYMLINK libspdk_fsdev_aio.so 00:03:09.923 SYMLINK libspdk_sock_posix.so 00:03:10.182 CC module/bdev/null/bdev_null.o 00:03:10.182 CC module/bdev/null/bdev_null_rpc.o 00:03:10.182 CC module/bdev/error/vbdev_error.o 00:03:10.182 CC module/bdev/error/vbdev_error_rpc.o 00:03:10.182 CC module/bdev/split/vbdev_split.o 00:03:10.182 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.182 CC module/bdev/gpt/gpt.o 00:03:10.182 CC module/bdev/gpt/vbdev_gpt.o 00:03:10.182 CC module/bdev/delay/vbdev_delay.o 00:03:10.182 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:10.182 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.182 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.182 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.182 CC module/bdev/nvme/bdev_nvme.o 00:03:10.182 CC module/bdev/nvme/nvme_rpc.o 00:03:10.182 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.182 CC module/bdev/nvme/vbdev_opal.o 00:03:10.182 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.182 CC module/bdev/raid/bdev_raid_rpc.o 00:03:10.182 CC module/bdev/raid/bdev_raid.o 00:03:10.182 CC module/bdev/ftl/bdev_ftl.o 00:03:10.182 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.182 CC module/bdev/raid/bdev_raid_sb.o 00:03:10.182 CC module/bdev/aio/bdev_aio.o 00:03:10.182 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.182 CC module/bdev/raid/raid0.o 00:03:10.182 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.182 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.182 CC module/bdev/raid/raid1.o 00:03:10.182 CC module/bdev/raid/concat.o 00:03:10.182 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.182 CC module/bdev/malloc/bdev_malloc.o 00:03:10.182 CC module/bdev/lvol/vbdev_lvol.o 00:03:10.182 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:10.182 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:10.182 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.182 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.182 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.182 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.182 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.182 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.443 LIB libspdk_bdev_split.a 00:03:10.443 LIB libspdk_blobfs_bdev.a 00:03:10.443 LIB libspdk_bdev_null.a 00:03:10.443 SO libspdk_bdev_split.so.6.0 00:03:10.443 SO libspdk_bdev_null.so.6.0 00:03:10.443 LIB libspdk_bdev_gpt.a 00:03:10.443 LIB libspdk_bdev_error.a 00:03:10.443 SO libspdk_blobfs_bdev.so.6.0 00:03:10.443 SO libspdk_bdev_gpt.so.6.0 00:03:10.443 LIB libspdk_bdev_passthru.a 00:03:10.443 SYMLINK libspdk_bdev_null.so 00:03:10.443 SO libspdk_bdev_error.so.6.0 00:03:10.443 LIB libspdk_bdev_ftl.a 00:03:10.443 SYMLINK libspdk_blobfs_bdev.so 00:03:10.443 SYMLINK libspdk_bdev_split.so 00:03:10.443 LIB libspdk_bdev_zone_block.a 00:03:10.443 SO libspdk_bdev_passthru.so.6.0 00:03:10.443 LIB libspdk_bdev_aio.a 00:03:10.443 SO libspdk_bdev_ftl.so.6.0 00:03:10.443 LIB libspdk_bdev_delay.a 00:03:10.443 SYMLINK libspdk_bdev_gpt.so 00:03:10.443 SO libspdk_bdev_zone_block.so.6.0 00:03:10.443 LIB libspdk_bdev_malloc.a 00:03:10.443 LIB libspdk_bdev_iscsi.a 00:03:10.443 SO libspdk_bdev_aio.so.6.0 00:03:10.443 SO libspdk_bdev_delay.so.6.0 00:03:10.443 SYMLINK libspdk_bdev_error.so 00:03:10.443 SYMLINK libspdk_bdev_passthru.so 00:03:10.713 SO libspdk_bdev_malloc.so.6.0 00:03:10.713 SO libspdk_bdev_iscsi.so.6.0 00:03:10.713 SYMLINK libspdk_bdev_ftl.so 00:03:10.713 SYMLINK libspdk_bdev_zone_block.so 00:03:10.713 SYMLINK libspdk_bdev_aio.so 00:03:10.713 SYMLINK libspdk_bdev_delay.so 00:03:10.713 SYMLINK libspdk_bdev_iscsi.so 00:03:10.713 SYMLINK libspdk_bdev_malloc.so 00:03:10.713 LIB libspdk_bdev_lvol.a 00:03:10.713 LIB libspdk_bdev_virtio.a 00:03:10.713 SO libspdk_bdev_lvol.so.6.0 00:03:10.713 SO libspdk_bdev_virtio.so.6.0 00:03:10.713 SYMLINK libspdk_bdev_lvol.so 00:03:10.713 SYMLINK libspdk_bdev_virtio.so 00:03:10.979 LIB libspdk_bdev_raid.a 00:03:10.979 SO libspdk_bdev_raid.so.6.0 00:03:11.239 SYMLINK libspdk_bdev_raid.so 00:03:11.809 LIB libspdk_bdev_nvme.a 00:03:11.809 SO libspdk_bdev_nvme.so.7.1 00:03:12.068 SYMLINK libspdk_bdev_nvme.so 00:03:12.638 CC module/event/subsystems/keyring/keyring.o 00:03:12.638 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.638 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.638 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.638 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.638 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:12.638 CC module/event/subsystems/vmd/vmd.o 00:03:12.638 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.638 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.638 CC module/event/subsystems/sock/sock.o 00:03:12.638 LIB libspdk_event_keyring.a 00:03:12.897 LIB libspdk_event_iobuf.a 00:03:12.897 LIB libspdk_event_scheduler.a 00:03:12.897 LIB libspdk_event_vhost_blk.a 00:03:12.897 SO libspdk_event_keyring.so.1.0 00:03:12.897 LIB libspdk_event_vfu_tgt.a 00:03:12.897 LIB libspdk_event_fsdev.a 00:03:12.897 LIB libspdk_event_vmd.a 00:03:12.897 LIB libspdk_event_sock.a 00:03:12.897 SO libspdk_event_vhost_blk.so.3.0 00:03:12.897 SO libspdk_event_iobuf.so.3.0 00:03:12.897 SO libspdk_event_scheduler.so.4.0 00:03:12.897 SO libspdk_event_vfu_tgt.so.3.0 00:03:12.897 SO libspdk_event_vmd.so.6.0 00:03:12.897 SO libspdk_event_fsdev.so.1.0 00:03:12.897 SO libspdk_event_sock.so.5.0 
00:03:12.897 SYMLINK libspdk_event_keyring.so 00:03:12.897 SYMLINK libspdk_event_vhost_blk.so 00:03:12.897 SYMLINK libspdk_event_iobuf.so 00:03:12.897 SYMLINK libspdk_event_scheduler.so 00:03:12.897 SYMLINK libspdk_event_vfu_tgt.so 00:03:12.897 SYMLINK libspdk_event_vmd.so 00:03:12.897 SYMLINK libspdk_event_fsdev.so 00:03:12.897 SYMLINK libspdk_event_sock.so 00:03:13.157 CC module/event/subsystems/accel/accel.o 00:03:13.459 LIB libspdk_event_accel.a 00:03:13.459 SO libspdk_event_accel.so.6.0 00:03:13.459 SYMLINK libspdk_event_accel.so 00:03:13.719 CC module/event/subsystems/bdev/bdev.o 00:03:13.980 LIB libspdk_event_bdev.a 00:03:13.980 SO libspdk_event_bdev.so.6.0 00:03:13.980 SYMLINK libspdk_event_bdev.so 00:03:14.241 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.241 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.241 CC module/event/subsystems/nbd/nbd.o 00:03:14.504 CC module/event/subsystems/scsi/scsi.o 00:03:14.504 CC module/event/subsystems/ublk/ublk.o 00:03:14.504 LIB libspdk_event_nbd.a 00:03:14.504 LIB libspdk_event_ublk.a 00:03:14.504 LIB libspdk_event_scsi.a 00:03:14.504 SO libspdk_event_nbd.so.6.0 00:03:14.504 SO libspdk_event_ublk.so.3.0 00:03:14.504 SO libspdk_event_scsi.so.6.0 00:03:14.504 LIB libspdk_event_nvmf.a 00:03:14.765 SO libspdk_event_nvmf.so.6.0 00:03:14.765 SYMLINK libspdk_event_nbd.so 00:03:14.765 SYMLINK libspdk_event_ublk.so 00:03:14.765 SYMLINK libspdk_event_scsi.so 00:03:14.765 SYMLINK libspdk_event_nvmf.so 00:03:15.026 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.026 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.288 LIB libspdk_event_iscsi.a 00:03:15.288 LIB libspdk_event_vhost_scsi.a 00:03:15.288 SO libspdk_event_iscsi.so.6.0 00:03:15.288 SO libspdk_event_vhost_scsi.so.3.0 00:03:15.288 SYMLINK libspdk_event_iscsi.so 00:03:15.288 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.549 SO libspdk.so.6.0 00:03:15.549 SYMLINK libspdk.so 00:03:15.810 CC test/rpc_client/rpc_client_test.o 00:03:15.810 TEST_HEADER include/spdk/accel.h 00:03:15.810 CXX app/trace/trace.o 00:03:15.810 TEST_HEADER include/spdk/barrier.h 00:03:15.810 TEST_HEADER include/spdk/accel_module.h 00:03:15.810 TEST_HEADER include/spdk/base64.h 00:03:15.810 TEST_HEADER include/spdk/assert.h 00:03:15.810 TEST_HEADER include/spdk/bdev_module.h 00:03:15.810 TEST_HEADER include/spdk/bdev.h 00:03:15.810 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.810 CC app/trace_record/trace_record.o 00:03:15.810 TEST_HEADER include/spdk/bit_array.h 00:03:15.810 TEST_HEADER include/spdk/bit_pool.h 00:03:15.810 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.810 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.810 TEST_HEADER include/spdk/blobfs.h 00:03:15.810 TEST_HEADER include/spdk/blob.h 00:03:15.810 TEST_HEADER include/spdk/config.h 00:03:15.810 TEST_HEADER include/spdk/conf.h 00:03:15.810 TEST_HEADER include/spdk/cpuset.h 00:03:15.810 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.810 TEST_HEADER include/spdk/crc32.h 00:03:15.810 TEST_HEADER include/spdk/crc16.h 00:03:15.810 CC app/spdk_lspci/spdk_lspci.o 00:03:15.810 TEST_HEADER include/spdk/dma.h 00:03:15.810 CC app/spdk_top/spdk_top.o 00:03:15.810 TEST_HEADER include/spdk/crc64.h 00:03:15.810 TEST_HEADER include/spdk/dif.h 00:03:15.810 CC app/spdk_nvme_identify/identify.o 00:03:15.810 TEST_HEADER include/spdk/endian.h 00:03:15.810 CC app/spdk_nvme_perf/perf.o 00:03:15.810 TEST_HEADER include/spdk/env.h 00:03:15.810 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.810 TEST_HEADER include/spdk/event.h 00:03:15.810 TEST_HEADER 
include/spdk/fd.h 00:03:15.810 TEST_HEADER include/spdk/fd_group.h 00:03:15.810 TEST_HEADER include/spdk/fsdev.h 00:03:15.810 TEST_HEADER include/spdk/file.h 00:03:15.810 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.810 TEST_HEADER include/spdk/ftl.h 00:03:15.810 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.810 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.810 TEST_HEADER include/spdk/hexlify.h 00:03:15.810 TEST_HEADER include/spdk/idxd.h 00:03:15.810 TEST_HEADER include/spdk/histogram_data.h 00:03:15.810 TEST_HEADER include/spdk/init.h 00:03:15.810 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.810 TEST_HEADER include/spdk/ioat.h 00:03:15.810 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.810 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.810 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.810 TEST_HEADER include/spdk/json.h 00:03:15.810 TEST_HEADER include/spdk/keyring.h 00:03:15.810 TEST_HEADER include/spdk/keyring_module.h 00:03:15.810 TEST_HEADER include/spdk/likely.h 00:03:15.810 TEST_HEADER include/spdk/log.h 00:03:15.810 TEST_HEADER include/spdk/lvol.h 00:03:15.810 TEST_HEADER include/spdk/md5.h 00:03:15.810 TEST_HEADER include/spdk/mmio.h 00:03:15.810 TEST_HEADER include/spdk/memory.h 00:03:15.810 TEST_HEADER include/spdk/nbd.h 00:03:15.810 TEST_HEADER include/spdk/net.h 00:03:15.810 TEST_HEADER include/spdk/notify.h 00:03:15.810 TEST_HEADER include/spdk/nvme.h 00:03:15.810 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:15.810 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.810 CC app/nvmf_tgt/nvmf_main.o 00:03:15.810 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.810 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.810 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.810 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.810 CC app/spdk_dd/spdk_dd.o 00:03:15.810 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.810 TEST_HEADER include/spdk/nvmf.h 00:03:15.810 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.810 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.810 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.810 TEST_HEADER include/spdk/opal_spec.h 00:03:15.810 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.810 TEST_HEADER include/spdk/opal.h 00:03:15.810 TEST_HEADER include/spdk/pipe.h 00:03:15.810 TEST_HEADER include/spdk/pci_ids.h 00:03:15.810 TEST_HEADER include/spdk/queue.h 00:03:15.810 TEST_HEADER include/spdk/rpc.h 00:03:15.810 TEST_HEADER include/spdk/reduce.h 00:03:16.072 TEST_HEADER include/spdk/scheduler.h 00:03:16.072 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.072 TEST_HEADER include/spdk/scsi.h 00:03:16.072 CC app/spdk_tgt/spdk_tgt.o 00:03:16.072 TEST_HEADER include/spdk/stdinc.h 00:03:16.072 TEST_HEADER include/spdk/string.h 00:03:16.072 TEST_HEADER include/spdk/sock.h 00:03:16.072 TEST_HEADER include/spdk/thread.h 00:03:16.072 TEST_HEADER include/spdk/trace_parser.h 00:03:16.072 TEST_HEADER include/spdk/trace.h 00:03:16.072 TEST_HEADER include/spdk/tree.h 00:03:16.072 TEST_HEADER include/spdk/ublk.h 00:03:16.072 TEST_HEADER include/spdk/util.h 00:03:16.072 TEST_HEADER include/spdk/uuid.h 00:03:16.072 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.072 TEST_HEADER include/spdk/version.h 00:03:16.072 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.072 TEST_HEADER include/spdk/vhost.h 00:03:16.072 TEST_HEADER include/spdk/vmd.h 00:03:16.072 TEST_HEADER include/spdk/zipf.h 00:03:16.072 TEST_HEADER include/spdk/xor.h 00:03:16.072 CXX test/cpp_headers/accel.o 00:03:16.072 CXX test/cpp_headers/accel_module.o 00:03:16.072 CXX test/cpp_headers/barrier.o 00:03:16.072 CXX 
test/cpp_headers/assert.o 00:03:16.072 CXX test/cpp_headers/base64.o 00:03:16.072 CXX test/cpp_headers/bdev.o 00:03:16.072 CXX test/cpp_headers/bdev_zone.o 00:03:16.072 CXX test/cpp_headers/bdev_module.o 00:03:16.072 CXX test/cpp_headers/bit_array.o 00:03:16.072 CXX test/cpp_headers/blob_bdev.o 00:03:16.072 CXX test/cpp_headers/bit_pool.o 00:03:16.072 CXX test/cpp_headers/blobfs.o 00:03:16.072 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.072 CXX test/cpp_headers/conf.o 00:03:16.072 CXX test/cpp_headers/blob.o 00:03:16.072 CXX test/cpp_headers/config.o 00:03:16.072 CXX test/cpp_headers/cpuset.o 00:03:16.072 CXX test/cpp_headers/crc16.o 00:03:16.072 CXX test/cpp_headers/crc32.o 00:03:16.072 CXX test/cpp_headers/dif.o 00:03:16.072 CXX test/cpp_headers/crc64.o 00:03:16.072 CXX test/cpp_headers/endian.o 00:03:16.072 CXX test/cpp_headers/dma.o 00:03:16.072 CXX test/cpp_headers/env_dpdk.o 00:03:16.072 CXX test/cpp_headers/env.o 00:03:16.072 CXX test/cpp_headers/event.o 00:03:16.072 CXX test/cpp_headers/fd_group.o 00:03:16.072 CXX test/cpp_headers/file.o 00:03:16.072 CXX test/cpp_headers/fd.o 00:03:16.072 CXX test/cpp_headers/fsdev.o 00:03:16.072 CXX test/cpp_headers/ftl.o 00:03:16.072 CXX test/cpp_headers/fsdev_module.o 00:03:16.072 CXX test/cpp_headers/fuse_dispatcher.o 00:03:16.072 CXX test/cpp_headers/gpt_spec.o 00:03:16.072 CXX test/cpp_headers/hexlify.o 00:03:16.072 CXX test/cpp_headers/histogram_data.o 00:03:16.072 CXX test/cpp_headers/idxd_spec.o 00:03:16.072 CXX test/cpp_headers/init.o 00:03:16.072 CXX test/cpp_headers/idxd.o 00:03:16.072 CXX test/cpp_headers/ioat.o 00:03:16.072 CXX test/cpp_headers/ioat_spec.o 00:03:16.072 CXX test/cpp_headers/json.o 00:03:16.072 CXX test/cpp_headers/iscsi_spec.o 00:03:16.072 CXX test/cpp_headers/jsonrpc.o 00:03:16.072 CXX test/cpp_headers/keyring.o 00:03:16.072 CXX test/cpp_headers/lvol.o 00:03:16.072 CXX test/cpp_headers/keyring_module.o 00:03:16.072 CXX test/cpp_headers/likely.o 00:03:16.072 CXX test/cpp_headers/md5.o 00:03:16.072 CXX test/cpp_headers/memory.o 00:03:16.072 CXX test/cpp_headers/log.o 00:03:16.072 CXX test/cpp_headers/net.o 00:03:16.072 CXX test/cpp_headers/nbd.o 00:03:16.072 CXX test/cpp_headers/mmio.o 00:03:16.072 CXX test/cpp_headers/nvme.o 00:03:16.072 CXX test/cpp_headers/nvme_intel.o 00:03:16.072 CXX test/cpp_headers/notify.o 00:03:16.072 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.072 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.072 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.072 CXX test/cpp_headers/nvme_spec.o 00:03:16.072 CXX test/cpp_headers/nvme_zns.o 00:03:16.072 CXX test/cpp_headers/nvmf.o 00:03:16.072 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.072 CXX test/cpp_headers/nvmf_spec.o 00:03:16.072 CXX test/cpp_headers/opal.o 00:03:16.072 CXX test/cpp_headers/nvmf_transport.o 00:03:16.072 CXX test/cpp_headers/opal_spec.o 00:03:16.072 CXX test/cpp_headers/pci_ids.o 00:03:16.072 CXX test/cpp_headers/pipe.o 00:03:16.072 CXX test/cpp_headers/queue.o 00:03:16.072 CXX test/cpp_headers/reduce.o 00:03:16.072 CXX test/cpp_headers/rpc.o 00:03:16.072 CXX test/cpp_headers/scheduler.o 00:03:16.072 CXX test/cpp_headers/scsi.o 00:03:16.072 CXX test/cpp_headers/scsi_spec.o 00:03:16.072 CXX test/cpp_headers/string.o 00:03:16.072 CXX test/cpp_headers/sock.o 00:03:16.072 CXX test/cpp_headers/thread.o 00:03:16.072 CXX test/cpp_headers/stdinc.o 00:03:16.072 CXX test/cpp_headers/trace_parser.o 00:03:16.072 CXX test/cpp_headers/tree.o 00:03:16.072 CXX test/cpp_headers/ublk.o 00:03:16.072 CXX test/cpp_headers/trace.o 00:03:16.072 CXX 
test/cpp_headers/util.o 00:03:16.072 CXX test/cpp_headers/version.o 00:03:16.072 CXX test/cpp_headers/uuid.o 00:03:16.072 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.072 CXX test/cpp_headers/vhost.o 00:03:16.072 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.072 CC test/thread/poller_perf/poller_perf.o 00:03:16.072 CXX test/cpp_headers/vmd.o 00:03:16.072 CXX test/cpp_headers/xor.o 00:03:16.072 CXX test/cpp_headers/zipf.o 00:03:16.072 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.072 CC test/env/memory/memory_ut.o 00:03:16.072 LINK spdk_lspci 00:03:16.072 CC test/env/vtophys/vtophys.o 00:03:16.072 CC examples/util/zipf/zipf.o 00:03:16.072 CC examples/ioat/perf/perf.o 00:03:16.072 CC examples/ioat/verify/verify.o 00:03:16.072 CC test/app/jsoncat/jsoncat.o 00:03:16.072 CC test/app/histogram_perf/histogram_perf.o 00:03:16.072 CC test/env/pci/pci_ut.o 00:03:16.072 CC test/app/stub/stub.o 00:03:16.072 CC app/fio/nvme/fio_plugin.o 00:03:16.072 CC test/app/bdev_svc/bdev_svc.o 00:03:16.072 CC test/dma/test_dma/test_dma.o 00:03:16.072 LINK rpc_client_test 00:03:16.072 CC app/fio/bdev/fio_plugin.o 00:03:16.338 LINK spdk_nvme_discover 00:03:16.338 LINK nvmf_tgt 00:03:16.338 LINK spdk_trace_record 00:03:16.338 LINK interrupt_tgt 00:03:16.338 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.338 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.338 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.603 LINK iscsi_tgt 00:03:16.603 LINK spdk_tgt 00:03:16.603 LINK vtophys 00:03:16.603 LINK stub 00:03:16.603 LINK jsoncat 00:03:16.603 LINK zipf 00:03:16.603 LINK histogram_perf 00:03:16.603 LINK env_dpdk_post_init 00:03:16.603 LINK verify 00:03:16.603 LINK poller_perf 00:03:16.603 LINK ioat_perf 00:03:16.603 LINK spdk_trace 00:03:16.863 LINK spdk_dd 00:03:16.863 LINK bdev_svc 00:03:16.863 LINK pci_ut 00:03:17.124 LINK spdk_nvme 00:03:17.124 LINK vhost_fuzz 00:03:17.124 LINK nvme_fuzz 00:03:17.124 LINK test_dma 00:03:17.124 LINK spdk_top 00:03:17.124 CC app/vhost/vhost.o 00:03:17.124 LINK spdk_bdev 00:03:17.124 CC examples/sock/hello_world/hello_sock.o 00:03:17.124 LINK spdk_nvme_perf 00:03:17.124 CC test/event/reactor_perf/reactor_perf.o 00:03:17.124 CC examples/idxd/perf/perf.o 00:03:17.124 CC test/event/reactor/reactor.o 00:03:17.124 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.124 CC test/event/event_perf/event_perf.o 00:03:17.124 CC test/event/scheduler/scheduler.o 00:03:17.124 CC examples/vmd/led/led.o 00:03:17.124 LINK spdk_nvme_identify 00:03:17.124 CC examples/thread/thread/thread_ex.o 00:03:17.124 CC test/event/app_repeat/app_repeat.o 00:03:17.385 LINK mem_callbacks 00:03:17.385 LINK reactor 00:03:17.385 LINK led 00:03:17.385 LINK event_perf 00:03:17.385 LINK reactor_perf 00:03:17.385 LINK vhost 00:03:17.385 LINK lsvmd 00:03:17.385 LINK app_repeat 00:03:17.385 LINK hello_sock 00:03:17.385 LINK thread 00:03:17.385 LINK scheduler 00:03:17.385 LINK idxd_perf 00:03:17.647 LINK memory_ut 00:03:17.647 CC test/nvme/startup/startup.o 00:03:17.647 CC test/nvme/e2edp/nvme_dp.o 00:03:17.647 CC test/nvme/reset/reset.o 00:03:17.647 CC test/nvme/err_injection/err_injection.o 00:03:17.647 CC test/nvme/sgl/sgl.o 00:03:17.647 CC test/nvme/aer/aer.o 00:03:17.647 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.647 CC test/nvme/overhead/overhead.o 00:03:17.647 CC test/nvme/boot_partition/boot_partition.o 00:03:17.647 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.647 CC 
test/nvme/connect_stress/connect_stress.o 00:03:17.647 CC test/nvme/compliance/nvme_compliance.o 00:03:17.647 CC test/nvme/reserve/reserve.o 00:03:17.647 CC test/nvme/simple_copy/simple_copy.o 00:03:17.647 CC test/nvme/fdp/fdp.o 00:03:17.647 CC test/accel/dif/dif.o 00:03:17.647 CC test/nvme/cuse/cuse.o 00:03:17.647 CC test/blobfs/mkfs/mkfs.o 00:03:17.909 CC test/lvol/esnap/esnap.o 00:03:17.909 LINK startup 00:03:17.909 LINK boot_partition 00:03:17.909 LINK doorbell_aers 00:03:17.909 LINK reserve 00:03:17.909 LINK err_injection 00:03:17.909 LINK fused_ordering 00:03:17.909 LINK connect_stress 00:03:17.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.909 CC examples/nvme/reconnect/reconnect.o 00:03:17.909 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.909 CC examples/nvme/hotplug/hotplug.o 00:03:17.909 LINK nvme_dp 00:03:17.909 CC examples/nvme/abort/abort.o 00:03:17.909 LINK simple_copy 00:03:17.909 LINK aer 00:03:17.909 CC examples/nvme/hello_world/hello_world.o 00:03:17.909 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.909 CC examples/nvme/arbitration/arbitration.o 00:03:17.909 LINK sgl 00:03:17.909 LINK reset 00:03:17.909 LINK mkfs 00:03:17.909 LINK overhead 00:03:17.909 LINK iscsi_fuzz 00:03:17.909 LINK nvme_compliance 00:03:17.909 LINK fdp 00:03:17.909 CC examples/accel/perf/accel_perf.o 00:03:17.909 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.909 CC examples/blob/cli/blobcli.o 00:03:17.909 CC examples/blob/hello_world/hello_blob.o 00:03:18.169 LINK pmr_persistence 00:03:18.169 LINK cmb_copy 00:03:18.170 LINK hotplug 00:03:18.170 LINK hello_world 00:03:18.170 LINK reconnect 00:03:18.170 LINK arbitration 00:03:18.170 LINK dif 00:03:18.170 LINK abort 00:03:18.170 LINK hello_blob 00:03:18.170 LINK hello_fsdev 00:03:18.170 LINK nvme_manage 00:03:18.430 LINK accel_perf 00:03:18.430 LINK blobcli 00:03:18.692 LINK cuse 00:03:18.692 CC test/bdev/bdevio/bdevio.o 00:03:18.953 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.953 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.215 LINK bdevio 00:03:19.215 LINK hello_bdev 00:03:19.476 LINK bdevperf 00:03:20.047 CC examples/nvmf/nvmf/nvmf.o 00:03:20.308 LINK nvmf 00:03:20.880 LINK esnap 00:03:21.141 00:03:21.141 real 0m50.607s 00:03:21.141 user 7m19.843s 00:03:21.141 sys 4m13.974s 00:03:21.141 06:02:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.141 06:02:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.141 ************************************ 00:03:21.141 END TEST make 00:03:21.141 ************************************ 00:03:21.141 06:02:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.141 06:02:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.141 06:02:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.141 06:02:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.141 06:02:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.141 06:02:15 -- pm/common@44 -- $ pid=28237 00:03:21.141 06:02:15 -- pm/common@50 -- $ kill -TERM 28237 00:03:21.141 06:02:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.141 06:02:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.141 06:02:15 -- pm/common@44 -- $ pid=28238 00:03:21.141 06:02:15 -- pm/common@50 -- $ kill -TERM 28238 00:03:21.141 06:02:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:21.141 06:02:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:21.141 06:02:15 -- pm/common@44 -- $ pid=28241 00:03:21.141 06:02:15 -- pm/common@50 -- $ kill -TERM 28241 00:03:21.141 06:02:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.141 06:02:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:21.141 06:02:15 -- pm/common@44 -- $ pid=28264 00:03:21.141 06:02:15 -- pm/common@50 -- $ sudo -E kill -TERM 28264 00:03:21.141 06:02:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.141 06:02:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:21.403 06:02:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:21.403 06:02:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:21.403 06:02:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:21.403 06:02:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:21.403 06:02:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.403 06:02:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.403 06:02:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.403 06:02:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.403 06:02:15 -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.403 06:02:15 -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.404 06:02:15 -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.404 06:02:15 -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.404 06:02:15 -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.404 06:02:15 -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.404 06:02:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.404 06:02:15 -- scripts/common.sh@344 -- # case "$op" in 00:03:21.404 06:02:15 -- scripts/common.sh@345 -- # : 1 00:03:21.404 06:02:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.404 06:02:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.404 06:02:15 -- scripts/common.sh@365 -- # decimal 1 00:03:21.404 06:02:15 -- scripts/common.sh@353 -- # local d=1 00:03:21.404 06:02:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.404 06:02:15 -- scripts/common.sh@355 -- # echo 1 00:03:21.404 06:02:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.404 06:02:15 -- scripts/common.sh@366 -- # decimal 2 00:03:21.404 06:02:15 -- scripts/common.sh@353 -- # local d=2 00:03:21.404 06:02:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.404 06:02:15 -- scripts/common.sh@355 -- # echo 2 00:03:21.404 06:02:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.404 06:02:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.404 06:02:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.404 06:02:15 -- scripts/common.sh@368 -- # return 0 00:03:21.404 06:02:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.404 06:02:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.404 --rc genhtml_branch_coverage=1 00:03:21.404 --rc genhtml_function_coverage=1 00:03:21.404 --rc genhtml_legend=1 00:03:21.404 --rc geninfo_all_blocks=1 00:03:21.404 --rc geninfo_unexecuted_blocks=1 00:03:21.404 00:03:21.404 ' 00:03:21.404 06:02:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.404 --rc genhtml_branch_coverage=1 00:03:21.404 --rc genhtml_function_coverage=1 00:03:21.404 --rc genhtml_legend=1 00:03:21.404 --rc geninfo_all_blocks=1 00:03:21.404 --rc geninfo_unexecuted_blocks=1 00:03:21.404 00:03:21.404 ' 00:03:21.404 06:02:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.404 --rc genhtml_branch_coverage=1 00:03:21.404 --rc genhtml_function_coverage=1 00:03:21.404 --rc genhtml_legend=1 00:03:21.404 --rc geninfo_all_blocks=1 00:03:21.404 --rc geninfo_unexecuted_blocks=1 00:03:21.404 00:03:21.404 ' 00:03:21.404 06:02:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.404 --rc genhtml_branch_coverage=1 00:03:21.404 --rc genhtml_function_coverage=1 00:03:21.404 --rc genhtml_legend=1 00:03:21.404 --rc geninfo_all_blocks=1 00:03:21.404 --rc geninfo_unexecuted_blocks=1 00:03:21.404 00:03:21.404 ' 00:03:21.404 06:02:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:21.404 06:02:15 -- nvmf/common.sh@7 -- # uname -s 00:03:21.404 06:02:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.404 06:02:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.404 06:02:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.404 06:02:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.404 06:02:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.404 06:02:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.404 06:02:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.404 06:02:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.404 06:02:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.404 06:02:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.404 06:02:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:03:21.404 06:02:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:03:21.404 06:02:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.404 06:02:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.404 06:02:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:21.404 06:02:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.404 06:02:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:21.404 06:02:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:21.404 06:02:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.404 06:02:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.404 06:02:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.404 06:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.404 06:02:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.404 06:02:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.404 06:02:15 -- paths/export.sh@5 -- # export PATH 00:03:21.404 06:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.404 06:02:15 -- nvmf/common.sh@51 -- # : 0 00:03:21.404 06:02:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:21.404 06:02:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:21.404 06:02:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.404 06:02:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.404 06:02:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.404 06:02:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:21.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:21.404 06:02:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:21.404 06:02:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:21.404 06:02:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:21.404 06:02:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.404 06:02:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.404 06:02:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.404 06:02:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.404 06:02:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
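A note on the coredump wiring in the records just above and below: autotest.sh@33 saves the distribution's systemd-coredump core_pattern, @34 creates the output coredumps directory, and the echoes at @39-@40 (the next records) install the repo's core-collector.sh as a piped core handler. Assembled outside of xtrace, the sequence is roughly the sketch below; xtrace does not display redirections, so the '>' targets here are assumptions, not the verbatim autotest.sh source:

    # Sketch of the piped core_pattern handoff (redirect targets assumed).
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' on this host
    mkdir -p "$output_dir/coredumps"
    # %P = pid, %s = signal number, %t = dump time; the kernel pipes each core into the collector
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    echo "$output_dir/coredumps" > "$rootdir/.coredump_path"   # assumed bookkeeping target for @40's echo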
00:03:21.404 06:02:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.404 06:02:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:21.404 06:02:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.666 06:02:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.666 06:02:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.666 06:02:15 -- spdk/autotest.sh@48 -- # udevadm_pid=91033 00:03:21.666 06:02:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.666 06:02:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.666 06:02:15 -- pm/common@17 -- # local monitor 00:03:21.666 06:02:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.666 06:02:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.666 06:02:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.666 06:02:15 -- pm/common@21 -- # date +%s 00:03:21.666 06:02:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.666 06:02:15 -- pm/common@21 -- # date +%s 00:03:21.666 06:02:15 -- pm/common@25 -- # sleep 1 00:03:21.666 06:02:15 -- pm/common@21 -- # date +%s 00:03:21.666 06:02:15 -- pm/common@21 -- # date +%s 00:03:21.666 06:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733720536 00:03:21.666 06:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733720536 00:03:21.666 06:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733720536 00:03:21.666 06:02:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733720536 00:03:21.666 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733720536_collect-cpu-load.pm.log 00:03:21.666 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733720536_collect-vmstat.pm.log 00:03:21.666 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733720536_collect-cpu-temp.pm.log 00:03:21.666 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733720536_collect-bmc-pm.bmc.pm.log 00:03:22.610 06:02:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.610 06:02:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.610 06:02:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.610 06:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:22.610 06:02:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.610 06:02:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:22.610 06:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:22.610 06:02:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:22.610 06:02:17 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.610 06:02:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.610 06:02:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:22.610 06:02:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.610 06:02:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.610 06:02:17 -- common/autotest_common.sh@1457 -- # uname 00:03:22.610 06:02:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:22.610 06:02:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.610 06:02:17 -- common/autotest_common.sh@1477 -- # uname 00:03:22.610 06:02:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:22.610 06:02:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.610 06:02:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.610 lcov: LCOV version 1.15 00:03:22.610 06:02:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:37.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.447 06:02:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:52.447 06:02:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.447 06:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:52.447 06:02:46 -- spdk/autotest.sh@78 -- # rm -f 00:03:52.447 06:02:46 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.009 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:55.009 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:03:55.270 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:55.270 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:55.532 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:55.532 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:55.532 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:55.532 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:55.794 06:02:50 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.794 06:02:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.794 06:02:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.794 06:02:50 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:55.794 06:02:50 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:55.794 06:02:50 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:55.794 06:02:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.794 06:02:50 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:55.794 06:02:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.794 06:02:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:55.794 06:02:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.794 06:02:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.794 06:02:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.794 06:02:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.794 06:02:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.794 06:02:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.794 06:02:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.794 06:02:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.794 06:02:50 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.794 No valid GPT data, bailing 00:03:55.794 06:02:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.794 06:02:50 -- scripts/common.sh@394 -- # pt= 00:03:55.794 06:02:50 -- scripts/common.sh@395 -- # return 1 00:03:55.794 06:02:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.794 1+0 records in 00:03:55.794 1+0 records out 00:03:55.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00148266 s, 707 MB/s 00:03:55.794 06:02:50 -- spdk/autotest.sh@105 -- # sync 00:03:55.794 06:02:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.794 06:02:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.794 06:02:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.001 06:02:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:04.001 06:02:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:04.001 06:02:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:04.001 06:02:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:07.308 Hugepages
00:04:07.308 node hugesize free / total
00:04:07.308 node0 1048576kB 0 / 0
00:04:07.308 node0 2048kB 0 / 0
00:04:07.308 node1 1048576kB 0 / 0
00:04:07.308 node1 2048kB 0 / 0
00:04:07.308
00:04:07.308 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:07.308 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:07.308 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:07.308 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:07.308 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:04:07.308 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:04:07.308 06:03:01 -- spdk/autotest.sh@117 -- # uname -s 00:04:07.308 06:03:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:07.308 06:03:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:07.308 06:03:01 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.623 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:10.623 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:10.623 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:10.885 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:12.841 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.101 06:03:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:14.043 06:03:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:14.043 06:03:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:14.043 06:03:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.043 06:03:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:14.043 06:03:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.043 06:03:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.043 06:03:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.043 06:03:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.043 06:03:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:14.304 06:03:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:14.304 06:03:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:14.304 06:03:08 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.610 Waiting for block devices as requested 00:04:17.610 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:17.610 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:17.871 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:17.871 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:17.871 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:18.133 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:18.133 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:18.133 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:18.133 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:04:18.395 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:18.395 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
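One helper worth annotating before the setup.sh reset sequence resumes just below: get_nvme_bdfs, traced a few records up, derives the controller list from gen_nvme.sh's JSON bdev config rather than parsing lspci output. A condensed sketch of the helper as the xtrace shows it (standalone form; error handling simplified relative to the real autotest_common.sh):

    # Enumerate NVMe controller BDFs from the generated bdev JSON config.
    get_nvme_bdfs() {
        local -a bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1    # the traced (( 1 == 0 )) guard, inverted here for brevity
        printf '%s\n' "${bdfs[@]}"            # prints 0000:65:00.0 on this host
    }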
00:04:18.656 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:18.656 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:18.656 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:18.917 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:18.917 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:18.917 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:19.492 06:03:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:19.492 06:03:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:19.492 06:03:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:19.492 06:03:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:19.492 06:03:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:19.492 06:03:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:19.492 06:03:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:19.492 06:03:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:19.492 06:03:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:19.492 06:03:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:19.492 06:03:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:19.492 06:03:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:19.492 06:03:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:19.492 06:03:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:19.492 06:03:13 -- common/autotest_common.sh@1543 -- # continue 00:04:19.492 06:03:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:19.492 06:03:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.492 06:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.492 06:03:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:19.492 06:03:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.492 06:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.492 06:03:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.792 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:22.792 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:22.792 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:23.052 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:24.983 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.245 06:03:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:25.245 06:03:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.245 06:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.245 06:03:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:25.245 06:03:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:25.245 06:03:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.245 06:03:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:25.245 06:03:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:25.245 06:03:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:25.245 06:03:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:25.245 06:03:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:25.245 06:03:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.245 06:03:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.245 06:03:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.245 06:03:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.245 06:03:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.245 06:03:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:25.245 06:03:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:25.245 06:03:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.245 06:03:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:25.245 06:03:19 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:25.245 06:03:19 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:25.245 06:03:19 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:25.245 06:03:19 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:25.245 06:03:19 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:65:00.0 00:04:25.245 06:03:19 -- common/autotest_common.sh@1579 -- # [[ -z 0000:65:00.0 ]] 00:04:25.245 06:03:19 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=108519 00:04:25.245 06:03:19 -- common/autotest_common.sh@1585 -- # waitforlisten 108519 00:04:25.245 06:03:19 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.245 06:03:19 -- common/autotest_common.sh@835 -- # '[' -z 108519 ']' 00:04:25.245 06:03:19 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.245 06:03:19 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.245 06:03:19 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.245 06:03:19 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.245 06:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.507 [2024-12-09 06:03:19.887609] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
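The startup handshake above is the stock autotest pattern: launch spdk_tgt in the background, record spdk_tgt_pid, and block in waitforlisten (max_retries=100 in the trace) until the target answers on /var/tmp/spdk.sock; the EAL parameter dump that follows is the target coming up. A minimal stand-in sketch for that helper (the real one lives in autotest_common.sh; rpc_get_methods is assumed here as the liveness probe):

    # Start the target, then poll its UNIX-domain RPC socket until it responds.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    for (( i = 0; i < 100; i++ )); do    # mirrors the traced max_retries=100
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done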
00:04:25.507 [2024-12-09 06:03:19.887697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108519 ] 00:04:25.507 [2024-12-09 06:03:19.981798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.507 [2024-12-09 06:03:20.036088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.453 06:03:20 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.453 06:03:20 -- common/autotest_common.sh@868 -- # return 0 00:04:26.453 06:03:20 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:26.453 06:03:20 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:26.453 06:03:20 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0 00:04:29.763 nvme0n1 00:04:29.763 06:03:23 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.763 [2024-12-09 06:03:23.870486] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:29.763 request:
00:04:29.763 {
00:04:29.763 "nvme_ctrlr_name": "nvme0",
00:04:29.763 "password": "test",
00:04:29.763 "method": "bdev_nvme_opal_revert",
00:04:29.763 "req_id": 1
00:04:29.763 }
00:04:29.763 Got JSON-RPC error response
00:04:29.763 response:
00:04:29.763 {
00:04:29.763 "code": -32602,
00:04:29.763 "message": "Invalid parameters"
00:04:29.763 }
00:04:29.763 06:03:23 -- common/autotest_common.sh@1591 -- # true 00:04:29.763 06:03:23 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:29.763 06:03:23 -- common/autotest_common.sh@1595 -- # killprocess 108519 00:04:29.763 06:03:23 -- common/autotest_common.sh@954 -- # '[' -z 108519 ']' 00:04:29.763 06:03:23 -- common/autotest_common.sh@958 -- # kill -0 108519 00:04:29.763 06:03:23 -- common/autotest_common.sh@959 -- # uname 00:04:29.763 06:03:23 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.763 06:03:23 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108519 00:04:29.763 06:03:23 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.763 06:03:23 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.763 06:03:23 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108519' 00:04:29.763 killing process with pid 108519 00:04:29.763 06:03:23 -- common/autotest_common.sh@973 -- # kill 108519 00:04:29.763 06:03:23 -- common/autotest_common.sh@978 -- # wait 108519 00:04:32.312 06:03:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.312 06:03:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.312 06:03:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.312 06:03:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.312 06:03:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.312 06:03:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.312 06:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:32.312 06:03:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.312 06:03:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.312 06:03:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.312 06:03:26 -- common/autotest_common.sh@1111 -- #
xtrace_disable 00:04:32.312 06:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:32.312 ************************************ 00:04:32.312 START TEST env 00:04:32.312 ************************************ 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.312 * Looking for test storage... 00:04:32.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.312 06:03:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.312 06:03:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.312 06:03:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.312 06:03:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.312 06:03:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.312 06:03:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.312 06:03:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.312 06:03:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.312 06:03:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.312 06:03:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.312 06:03:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.312 06:03:26 env -- scripts/common.sh@344 -- # case "$op" in 00:04:32.312 06:03:26 env -- scripts/common.sh@345 -- # : 1 00:04:32.312 06:03:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.312 06:03:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.312 06:03:26 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.312 06:03:26 env -- scripts/common.sh@353 -- # local d=1 00:04:32.312 06:03:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.312 06:03:26 env -- scripts/common.sh@355 -- # echo 1 00:04:32.312 06:03:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.312 06:03:26 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.312 06:03:26 env -- scripts/common.sh@353 -- # local d=2 00:04:32.312 06:03:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.312 06:03:26 env -- scripts/common.sh@355 -- # echo 2 00:04:32.312 06:03:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.312 06:03:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.312 06:03:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.312 06:03:26 env -- scripts/common.sh@368 -- # return 0 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.312 --rc genhtml_branch_coverage=1 00:04:32.312 --rc genhtml_function_coverage=1 00:04:32.312 --rc genhtml_legend=1 00:04:32.312 --rc geninfo_all_blocks=1 00:04:32.312 --rc geninfo_unexecuted_blocks=1 00:04:32.312 00:04:32.312 ' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.312 --rc genhtml_branch_coverage=1 00:04:32.312 --rc genhtml_function_coverage=1 00:04:32.312 --rc genhtml_legend=1 00:04:32.312 --rc geninfo_all_blocks=1 00:04:32.312 --rc geninfo_unexecuted_blocks=1 00:04:32.312 00:04:32.312 ' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.312 --rc genhtml_branch_coverage=1 00:04:32.312 --rc genhtml_function_coverage=1 00:04:32.312 --rc genhtml_legend=1 00:04:32.312 --rc geninfo_all_blocks=1 00:04:32.312 --rc geninfo_unexecuted_blocks=1 00:04:32.312 00:04:32.312 ' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.312 --rc genhtml_branch_coverage=1 00:04:32.312 --rc genhtml_function_coverage=1 00:04:32.312 --rc genhtml_legend=1 00:04:32.312 --rc geninfo_all_blocks=1 00:04:32.312 --rc geninfo_unexecuted_blocks=1 00:04:32.312 00:04:32.312 ' 00:04:32.312 06:03:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.312 06:03:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.312 ************************************ 00:04:32.312 START TEST env_memory 00:04:32.312 ************************************ 00:04:32.312 06:03:26 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.312 00:04:32.312 00:04:32.312 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.312 http://cunit.sourceforge.net/ 00:04:32.312 00:04:32.312 00:04:32.312 Suite: memory 00:04:32.312 Test: alloc and free memory map ...[2024-12-09 06:03:26.745836] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.312 passed 00:04:32.312 Test: mem map translation ...[2024-12-09 06:03:26.764053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.312 [2024-12-09 06:03:26.764077] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.312 [2024-12-09 06:03:26.764112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.312 [2024-12-09 06:03:26.764118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.312 passed 00:04:32.312 Test: mem map registration ...[2024-12-09 06:03:26.803185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.312 [2024-12-09 06:03:26.803218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:32.312 passed 00:04:32.312 Test: mem map adjacent registrations ...passed 00:04:32.312 00:04:32.312 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.312 suites 1 1 n/a 0 0 00:04:32.312 tests 4 4 4 0 0 00:04:32.312 asserts 152 152 152 0 n/a 00:04:32.312 00:04:32.312 Elapsed time = 0.129 seconds 00:04:32.312 00:04:32.312 real 0m0.141s 00:04:32.312 user 0m0.133s 00:04:32.312 sys 0m0.005s 00:04:32.312 06:03:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.312 06:03:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.312 ************************************ 00:04:32.312 END TEST env_memory 00:04:32.312 ************************************ 00:04:32.312 06:03:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.312 06:03:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.574 06:03:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 ************************************ 00:04:32.574 START TEST env_vtophys 00:04:32.574 ************************************ 00:04:32.574 06:03:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.574 EAL: lib.eal log level changed from notice to debug 00:04:32.574 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.574 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.574 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.574 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.574 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.574 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.574 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.574 EAL: Detected lcore 7 as core 7 on socket 0 00:04:32.574 EAL: Detected lcore 8 as core 8 on socket 0 00:04:32.574 EAL: Detected lcore 9 as core 9 on socket 0 00:04:32.574 EAL: Detected lcore 10 as 
core 10 on socket 0 00:04:32.574 EAL: Detected lcore 11 as core 11 on socket 0 00:04:32.574 EAL: Detected lcore 12 as core 12 on socket 0 00:04:32.574 EAL: Detected lcore 13 as core 13 on socket 0 00:04:32.574 EAL: Detected lcore 14 as core 14 on socket 0 00:04:32.574 EAL: Detected lcore 15 as core 15 on socket 0 00:04:32.575 EAL: Detected lcore 16 as core 16 on socket 0 00:04:32.575 EAL: Detected lcore 17 as core 17 on socket 0 00:04:32.575 EAL: Detected lcore 18 as core 18 on socket 0 00:04:32.575 EAL: Detected lcore 19 as core 19 on socket 0 00:04:32.575 EAL: Detected lcore 20 as core 20 on socket 0 00:04:32.575 EAL: Detected lcore 21 as core 21 on socket 0 00:04:32.575 EAL: Detected lcore 22 as core 22 on socket 0 00:04:32.575 EAL: Detected lcore 23 as core 23 on socket 0 00:04:32.575 EAL: Detected lcore 24 as core 24 on socket 0 00:04:32.575 EAL: Detected lcore 25 as core 25 on socket 0 00:04:32.575 EAL: Detected lcore 26 as core 26 on socket 0 00:04:32.575 EAL: Detected lcore 27 as core 27 on socket 0 00:04:32.575 EAL: Detected lcore 28 as core 28 on socket 0 00:04:32.575 EAL: Detected lcore 29 as core 29 on socket 0 00:04:32.575 EAL: Detected lcore 30 as core 30 on socket 0 00:04:32.575 EAL: Detected lcore 31 as core 31 on socket 0 00:04:32.575 EAL: Detected lcore 32 as core 0 on socket 1 00:04:32.575 EAL: Detected lcore 33 as core 1 on socket 1 00:04:32.575 EAL: Detected lcore 34 as core 2 on socket 1 00:04:32.575 EAL: Detected lcore 35 as core 3 on socket 1 00:04:32.575 EAL: Detected lcore 36 as core 4 on socket 1 00:04:32.575 EAL: Detected lcore 37 as core 5 on socket 1 00:04:32.575 EAL: Detected lcore 38 as core 6 on socket 1 00:04:32.575 EAL: Detected lcore 39 as core 7 on socket 1 00:04:32.575 EAL: Detected lcore 40 as core 8 on socket 1 00:04:32.575 EAL: Detected lcore 41 as core 9 on socket 1 00:04:32.575 EAL: Detected lcore 42 as core 10 on socket 1 00:04:32.575 EAL: Detected lcore 43 as core 11 on socket 1 00:04:32.575 EAL: Detected lcore 44 as core 12 on socket 1 00:04:32.575 EAL: Detected lcore 45 as core 13 on socket 1 00:04:32.575 EAL: Detected lcore 46 as core 14 on socket 1 00:04:32.575 EAL: Detected lcore 47 as core 15 on socket 1 00:04:32.575 EAL: Detected lcore 48 as core 16 on socket 1 00:04:32.575 EAL: Detected lcore 49 as core 17 on socket 1 00:04:32.575 EAL: Detected lcore 50 as core 18 on socket 1 00:04:32.575 EAL: Detected lcore 51 as core 19 on socket 1 00:04:32.575 EAL: Detected lcore 52 as core 20 on socket 1 00:04:32.575 EAL: Detected lcore 53 as core 21 on socket 1 00:04:32.575 EAL: Detected lcore 54 as core 22 on socket 1 00:04:32.575 EAL: Detected lcore 55 as core 23 on socket 1 00:04:32.575 EAL: Detected lcore 56 as core 24 on socket 1 00:04:32.575 EAL: Detected lcore 57 as core 25 on socket 1 00:04:32.575 EAL: Detected lcore 58 as core 26 on socket 1 00:04:32.575 EAL: Detected lcore 59 as core 27 on socket 1 00:04:32.575 EAL: Detected lcore 60 as core 28 on socket 1 00:04:32.575 EAL: Detected lcore 61 as core 29 on socket 1 00:04:32.575 EAL: Detected lcore 62 as core 30 on socket 1 00:04:32.575 EAL: Detected lcore 63 as core 31 on socket 1 00:04:32.575 EAL: Detected lcore 64 as core 0 on socket 0 00:04:32.575 EAL: Detected lcore 65 as core 1 on socket 0 00:04:32.575 EAL: Detected lcore 66 as core 2 on socket 0 00:04:32.575 EAL: Detected lcore 67 as core 3 on socket 0 00:04:32.575 EAL: Detected lcore 68 as core 4 on socket 0 00:04:32.575 EAL: Detected lcore 69 as core 5 on socket 0 00:04:32.575 EAL: Detected lcore 70 as core 6 on socket 0 
00:04:32.575 EAL: Detected lcore 71 as core 7 on socket 0 00:04:32.575 EAL: Detected lcore 72 as core 8 on socket 0 00:04:32.575 EAL: Detected lcore 73 as core 9 on socket 0 00:04:32.575 EAL: Detected lcore 74 as core 10 on socket 0 00:04:32.575 EAL: Detected lcore 75 as core 11 on socket 0 00:04:32.575 EAL: Detected lcore 76 as core 12 on socket 0 00:04:32.575 EAL: Detected lcore 77 as core 13 on socket 0 00:04:32.575 EAL: Detected lcore 78 as core 14 on socket 0 00:04:32.575 EAL: Detected lcore 79 as core 15 on socket 0 00:04:32.575 EAL: Detected lcore 80 as core 16 on socket 0 00:04:32.575 EAL: Detected lcore 81 as core 17 on socket 0 00:04:32.575 EAL: Detected lcore 82 as core 18 on socket 0 00:04:32.575 EAL: Detected lcore 83 as core 19 on socket 0 00:04:32.575 EAL: Detected lcore 84 as core 20 on socket 0 00:04:32.575 EAL: Detected lcore 85 as core 21 on socket 0 00:04:32.575 EAL: Detected lcore 86 as core 22 on socket 0 00:04:32.575 EAL: Detected lcore 87 as core 23 on socket 0 00:04:32.575 EAL: Detected lcore 88 as core 24 on socket 0 00:04:32.575 EAL: Detected lcore 89 as core 25 on socket 0 00:04:32.575 EAL: Detected lcore 90 as core 26 on socket 0 00:04:32.575 EAL: Detected lcore 91 as core 27 on socket 0 00:04:32.575 EAL: Detected lcore 92 as core 28 on socket 0 00:04:32.575 EAL: Detected lcore 93 as core 29 on socket 0 00:04:32.575 EAL: Detected lcore 94 as core 30 on socket 0 00:04:32.575 EAL: Detected lcore 95 as core 31 on socket 0 00:04:32.575 EAL: Detected lcore 96 as core 0 on socket 1 00:04:32.575 EAL: Detected lcore 97 as core 1 on socket 1 00:04:32.575 EAL: Detected lcore 98 as core 2 on socket 1 00:04:32.575 EAL: Detected lcore 99 as core 3 on socket 1 00:04:32.575 EAL: Detected lcore 100 as core 4 on socket 1 00:04:32.575 EAL: Detected lcore 101 as core 5 on socket 1 00:04:32.575 EAL: Detected lcore 102 as core 6 on socket 1 00:04:32.575 EAL: Detected lcore 103 as core 7 on socket 1 00:04:32.575 EAL: Detected lcore 104 as core 8 on socket 1 00:04:32.575 EAL: Detected lcore 105 as core 9 on socket 1 00:04:32.575 EAL: Detected lcore 106 as core 10 on socket 1 00:04:32.575 EAL: Detected lcore 107 as core 11 on socket 1 00:04:32.575 EAL: Detected lcore 108 as core 12 on socket 1 00:04:32.575 EAL: Detected lcore 109 as core 13 on socket 1 00:04:32.575 EAL: Detected lcore 110 as core 14 on socket 1 00:04:32.575 EAL: Detected lcore 111 as core 15 on socket 1 00:04:32.575 EAL: Detected lcore 112 as core 16 on socket 1 00:04:32.575 EAL: Detected lcore 113 as core 17 on socket 1 00:04:32.575 EAL: Detected lcore 114 as core 18 on socket 1 00:04:32.575 EAL: Detected lcore 115 as core 19 on socket 1 00:04:32.575 EAL: Detected lcore 116 as core 20 on socket 1 00:04:32.575 EAL: Detected lcore 117 as core 21 on socket 1 00:04:32.575 EAL: Detected lcore 118 as core 22 on socket 1 00:04:32.575 EAL: Detected lcore 119 as core 23 on socket 1 00:04:32.575 EAL: Detected lcore 120 as core 24 on socket 1 00:04:32.575 EAL: Detected lcore 121 as core 25 on socket 1 00:04:32.575 EAL: Detected lcore 122 as core 26 on socket 1 00:04:32.575 EAL: Detected lcore 123 as core 27 on socket 1 00:04:32.575 EAL: Detected lcore 124 as core 28 on socket 1 00:04:32.575 EAL: Detected lcore 125 as core 29 on socket 1 00:04:32.575 EAL: Detected lcore 126 as core 30 on socket 1 00:04:32.575 EAL: Detected lcore 127 as core 31 on socket 1 00:04:32.575 EAL: Maximum logical cores by configuration: 128 00:04:32.575 EAL: Detected CPU lcores: 128 00:04:32.575 EAL: Detected NUMA nodes: 2 00:04:32.575 EAL: Checking 
presence of .so 'librte_eal.so.24.1' 00:04:32.575 EAL: Detected shared linkage of DPDK 00:04:32.575 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.575 EAL: Bus pci wants IOVA as 'DC' 00:04:32.575 EAL: Buses did not request a specific IOVA mode. 00:04:32.575 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.575 EAL: Selected IOVA mode 'VA' 00:04:32.575 EAL: Probing VFIO support... 00:04:32.575 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.575 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.575 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.575 EAL: VFIO support initialized 00:04:32.575 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.575 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.575 EAL: Setting up physically contiguous memory... 00:04:32.575 EAL: Setting maximum number of open files to 524288 00:04:32.575 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.575 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.575 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.575 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 
0x201400c00000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.575 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.575 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.575 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.575 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.575 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.575 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.576 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.576 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.576 EAL: Hugepages will be freed exactly as allocated. 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: TSC frequency is ~2600000 KHz 00:04:32.576 EAL: Main lcore 0 is ready (tid=7f3e93e4ca00;cpuset=[0]) 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 0 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.576 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.576 00:04:32.576 00:04:32.576 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.576 http://cunit.sourceforge.net/ 00:04:32.576 00:04:32.576 00:04:32.576 Suite: components_suite 00:04:32.576 Test: vtophys_malloc_test ...passed 00:04:32.576 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.576 EAL: Trying to obtain current memory policy. 
00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.576 EAL: Trying to obtain current memory policy. 00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.576 EAL: Restoring previous memory policy: 4 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.576 EAL: request: mp_malloc_sync 00:04:32.576 EAL: No shared files mode enabled, IPC is disabled 00:04:32.576 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.576 EAL: Trying to obtain current memory policy. 
00:04:32.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.837 EAL: Restoring previous memory policy: 4 00:04:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.837 EAL: request: mp_malloc_sync 00:04:32.837 EAL: No shared files mode enabled, IPC is disabled 00:04:32.837 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.837 EAL: request: mp_malloc_sync 00:04:32.837 EAL: No shared files mode enabled, IPC is disabled 00:04:32.837 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.837 EAL: Trying to obtain current memory policy. 00:04:32.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.837 EAL: Restoring previous memory policy: 4 00:04:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.837 EAL: request: mp_malloc_sync 00:04:32.837 EAL: No shared files mode enabled, IPC is disabled 00:04:32.837 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.837 EAL: request: mp_malloc_sync 00:04:32.837 EAL: No shared files mode enabled, IPC is disabled 00:04:32.837 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.837 EAL: Trying to obtain current memory policy. 00:04:32.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.097 EAL: Restoring previous memory policy: 4 00:04:33.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.097 EAL: request: mp_malloc_sync 00:04:33.097 EAL: No shared files mode enabled, IPC is disabled 00:04:33.097 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.357 EAL: request: mp_malloc_sync 00:04:33.358 EAL: No shared files mode enabled, IPC is disabled 00:04:33.358 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.358 passed 00:04:33.358 00:04:33.358 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.358 suites 1 1 n/a 0 0 00:04:33.358 tests 2 2 2 0 0 00:04:33.358 asserts 497 497 497 0 n/a 00:04:33.358 00:04:33.358 Elapsed time = 0.612 seconds 00:04:33.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.358 EAL: request: mp_malloc_sync 00:04:33.358 EAL: No shared files mode enabled, IPC is disabled 00:04:33.358 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.358 EAL: No shared files mode enabled, IPC is disabled 00:04:33.358 EAL: No shared files mode enabled, IPC is disabled 00:04:33.358 EAL: No shared files mode enabled, IPC is disabled 00:04:33.358 00:04:33.358 real 0m0.762s 00:04:33.358 user 0m0.391s 00:04:33.358 sys 0m0.338s 00:04:33.358 06:03:27 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.358 06:03:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.358 ************************************ 00:04:33.358 END TEST env_vtophys 00:04:33.358 ************************************ 00:04:33.358 06:03:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.358 06:03:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.358 06:03:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.358 06:03:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.358 ************************************ 00:04:33.358 START TEST env_pci 00:04:33.358 ************************************ 00:04:33.358 06:03:27 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.358 00:04:33.358 00:04:33.358 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:33.358 http://cunit.sourceforge.net/ 00:04:33.358 00:04:33.358 00:04:33.358 Suite: pci 00:04:33.358 Test: pci_hook ...[2024-12-09 06:03:27.789793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 109993 has claimed it 00:04:33.358 EAL: Cannot find device (10000:00:01.0) 00:04:33.358 EAL: Failed to attach device on primary process 00:04:33.358 passed 00:04:33.358 00:04:33.358 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.358 suites 1 1 n/a 0 0 00:04:33.358 tests 1 1 1 0 0 00:04:33.358 asserts 25 25 25 0 n/a 00:04:33.358 00:04:33.358 Elapsed time = 0.031 seconds 00:04:33.358 00:04:33.358 real 0m0.052s 00:04:33.358 user 0m0.017s 00:04:33.358 sys 0m0.035s 00:04:33.358 06:03:27 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.358 06:03:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.358 ************************************ 00:04:33.358 END TEST env_pci 00:04:33.358 ************************************ 00:04:33.358 06:03:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.358 06:03:27 env -- env/env.sh@15 -- # uname 00:04:33.358 06:03:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.358 06:03:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.358 06:03:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.358 06:03:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:33.358 06:03:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.358 06:03:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.358 ************************************ 00:04:33.358 START TEST env_dpdk_post_init 00:04:33.358 ************************************ 00:04:33.358 06:03:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.358 EAL: Detected CPU lcores: 128 00:04:33.358 EAL: Detected NUMA nodes: 2 00:04:33.358 EAL: Detected shared linkage of DPDK 00:04:33.358 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.619 EAL: Selected IOVA mode 'VA' 00:04:33.619 EAL: VFIO support initialized 00:04:33.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.619 EAL: Using IOMMU type 1 (Type 1) 00:04:33.619 EAL: Ignore mapping IO port bar(1) 00:04:33.879 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:33.879 EAL: Ignore mapping IO port bar(1) 00:04:34.141 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:34.141 EAL: Ignore mapping IO port bar(1) 00:04:34.141 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:34.402 EAL: Ignore mapping IO port bar(1) 00:04:34.402 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:34.663 EAL: Ignore mapping IO port bar(1) 00:04:34.663 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:34.924 EAL: Ignore mapping IO port bar(1) 00:04:34.924 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:34.924 EAL: Ignore mapping IO port bar(1) 00:04:35.185 EAL: Probe PCI driver: spdk_ioat 
(8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:35.185 EAL: Ignore mapping IO port bar(1) 00:04:35.446 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:36.019 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:65:00.0 (socket 0) 00:04:36.281 EAL: Ignore mapping IO port bar(1) 00:04:36.281 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:36.542 EAL: Ignore mapping IO port bar(1) 00:04:36.542 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:36.802 EAL: Ignore mapping IO port bar(1) 00:04:36.802 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:36.802 EAL: Ignore mapping IO port bar(1) 00:04:37.064 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:37.064 EAL: Ignore mapping IO port bar(1) 00:04:37.324 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:37.324 EAL: Ignore mapping IO port bar(1) 00:04:37.585 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:37.585 EAL: Ignore mapping IO port bar(1) 00:04:37.585 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:37.846 EAL: Ignore mapping IO port bar(1) 00:04:37.846 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:42.089 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:42.089 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:42.089 Starting DPDK initialization... 00:04:42.089 Starting SPDK post initialization... 00:04:42.089 SPDK NVMe probe 00:04:42.089 Attaching to 0000:65:00.0 00:04:42.089 Attached to 0000:65:00.0 00:04:42.089 Cleaning up... 00:04:44.002 00:04:44.002 real 0m10.402s 00:04:44.002 user 0m3.901s 00:04:44.002 sys 0m0.525s 00:04:44.002 06:03:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.002 06:03:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 END TEST env_dpdk_post_init 00:04:44.002 ************************************ 00:04:44.002 06:03:38 env -- env/env.sh@26 -- # uname 00:04:44.002 06:03:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.002 06:03:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.002 06:03:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.002 06:03:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.002 06:03:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 START TEST env_mem_callbacks 00:04:44.002 ************************************ 00:04:44.002 06:03:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.002 EAL: Detected CPU lcores: 128 00:04:44.002 EAL: Detected NUMA nodes: 2 00:04:44.002 EAL: Detected shared linkage of DPDK 00:04:44.002 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.002 EAL: Selected IOVA mode 'VA' 00:04:44.002 EAL: VFIO support initialized 00:04:44.002 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.002 00:04:44.002 00:04:44.002 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.002 http://cunit.sourceforge.net/ 00:04:44.002 00:04:44.002 00:04:44.002 Suite: memory 
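The memory suite that follows pairs each malloc with the register/unregister notification SPDK receives for it; note below how a 3 MiB malloc (3145728) surfaces as a 4 MiB registration (4194304), since registrations cover whole hugepages rather than the exact request. To rerun just this suite against a local build (same hugepage caveat as above):

    # Standalone run of the mem-callbacks unit test; the register/unregister
    # lines are SPDK's mem event callback firing once per heap change.
    sudo ./test/env/mem_callbacks/mem_callbacks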
00:04:44.002 Test: test ... 00:04:44.002 register 0x200000200000 2097152 00:04:44.002 malloc 3145728 00:04:44.002 register 0x200000400000 4194304 00:04:44.002 buf 0x200000500000 len 3145728 PASSED 00:04:44.002 malloc 64 00:04:44.002 buf 0x2000004fff40 len 64 PASSED 00:04:44.002 malloc 4194304 00:04:44.002 register 0x200000800000 6291456 00:04:44.002 buf 0x200000a00000 len 4194304 PASSED 00:04:44.002 free 0x200000500000 3145728 00:04:44.002 free 0x2000004fff40 64 00:04:44.002 unregister 0x200000400000 4194304 PASSED 00:04:44.002 free 0x200000a00000 4194304 00:04:44.002 unregister 0x200000800000 6291456 PASSED 00:04:44.002 malloc 8388608 00:04:44.002 register 0x200000400000 10485760 00:04:44.002 buf 0x200000600000 len 8388608 PASSED 00:04:44.002 free 0x200000600000 8388608 00:04:44.002 unregister 0x200000400000 10485760 PASSED 00:04:44.002 passed 00:04:44.002 00:04:44.002 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.002 suites 1 1 n/a 0 0 00:04:44.002 tests 1 1 1 0 0 00:04:44.002 asserts 15 15 15 0 n/a 00:04:44.002 00:04:44.002 Elapsed time = 0.010 seconds 00:04:44.002 00:04:44.002 real 0m0.078s 00:04:44.002 user 0m0.012s 00:04:44.002 sys 0m0.066s 00:04:44.002 06:03:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.002 06:03:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 END TEST env_mem_callbacks 00:04:44.002 ************************************ 00:04:44.002 00:04:44.002 real 0m12.020s 00:04:44.002 user 0m4.710s 00:04:44.002 sys 0m1.330s 00:04:44.002 06:03:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.002 06:03:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 END TEST env 00:04:44.002 ************************************ 00:04:44.002 06:03:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.002 06:03:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.002 06:03:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.002 06:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 START TEST rpc 00:04:44.002 ************************************ 00:04:44.002 06:03:38 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.263 * Looking for test storage... 
00:04:44.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.263 06:03:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.263 06:03:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.263 06:03:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.263 06:03:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.263 06:03:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.263 06:03:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.263 06:03:38 rpc -- scripts/common.sh@345 -- # : 1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.263 06:03:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.263 06:03:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.263 06:03:38 rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.263 06:03:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.263 06:03:38 rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.263 06:03:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.263 06:03:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.263 06:03:38 rpc -- scripts/common.sh@368 -- # return 0 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.263 --rc genhtml_branch_coverage=1 00:04:44.263 --rc genhtml_function_coverage=1 00:04:44.263 --rc genhtml_legend=1 00:04:44.263 --rc geninfo_all_blocks=1 00:04:44.263 --rc geninfo_unexecuted_blocks=1 00:04:44.263 00:04:44.263 ' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.263 --rc genhtml_branch_coverage=1 00:04:44.263 --rc genhtml_function_coverage=1 00:04:44.263 --rc genhtml_legend=1 00:04:44.263 --rc geninfo_all_blocks=1 00:04:44.263 --rc geninfo_unexecuted_blocks=1 00:04:44.263 00:04:44.263 ' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.263 --rc genhtml_branch_coverage=1 00:04:44.263 --rc genhtml_function_coverage=1 
00:04:44.263 --rc genhtml_legend=1 00:04:44.263 --rc geninfo_all_blocks=1 00:04:44.263 --rc geninfo_unexecuted_blocks=1 00:04:44.263 00:04:44.263 ' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.263 --rc genhtml_branch_coverage=1 00:04:44.263 --rc genhtml_function_coverage=1 00:04:44.263 --rc genhtml_legend=1 00:04:44.263 --rc geninfo_all_blocks=1 00:04:44.263 --rc geninfo_unexecuted_blocks=1 00:04:44.263 00:04:44.263 ' 00:04:44.263 06:03:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=111941 00:04:44.263 06:03:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.263 06:03:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 111941 00:04:44.263 06:03:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 111941 ']' 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.263 06:03:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.525 [2024-12-09 06:03:38.851590] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:04:44.525 [2024-12-09 06:03:38.851661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111941 ] 00:04:44.525 [2024-12-09 06:03:38.935160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.525 [2024-12-09 06:03:38.966265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.525 [2024-12-09 06:03:38.966298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 111941' to capture a snapshot of events at runtime. 00:04:44.525 [2024-12-09 06:03:38.966304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.525 [2024-12-09 06:03:38.966308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.525 [2024-12-09 06:03:38.966313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid111941 for offline analysis/debug. 
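rpc_integrity, which runs next, drives the freshly started target purely over its Unix-domain RPC socket. The same sequence can be replayed by hand with scripts/rpc.py against the default /var/tmp/spdk.sock; a sketch of the calls the test wraps:

    scripts/rpc.py bdev_get_bdevs                    # expect [] on a clean target
    scripts/rpc.py bdev_malloc_create 8 512          # 8 MiB of 512 B blocks -> Malloc0
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length        # expect 2: Malloc0 plus Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length        # expect 0 again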
00:04:44.525 [2024-12-09 06:03:38.966760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.102 06:03:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.102 06:03:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.102 06:03:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:45.102 06:03:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:45.102 06:03:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.102 06:03:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.102 06:03:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.102 06:03:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.102 06:03:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.102 ************************************ 00:04:45.102 START TEST rpc_integrity 00:04:45.102 ************************************ 00:04:45.102 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:45.102 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.102 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.102 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.102 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.102 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.102 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.364 { 00:04:45.364 "name": "Malloc0", 00:04:45.364 "aliases": [ 00:04:45.364 "7a1e9655-ec34-459e-9e16-8f8d619172ea" 00:04:45.364 ], 00:04:45.364 "product_name": "Malloc disk", 00:04:45.364 "block_size": 512, 00:04:45.364 "num_blocks": 16384, 00:04:45.364 "uuid": "7a1e9655-ec34-459e-9e16-8f8d619172ea", 00:04:45.364 "assigned_rate_limits": { 00:04:45.364 "rw_ios_per_sec": 0, 00:04:45.364 "rw_mbytes_per_sec": 0, 00:04:45.364 "r_mbytes_per_sec": 0, 00:04:45.364 "w_mbytes_per_sec": 0 00:04:45.364 }, 
00:04:45.364 "claimed": false, 00:04:45.364 "zoned": false, 00:04:45.364 "supported_io_types": { 00:04:45.364 "read": true, 00:04:45.364 "write": true, 00:04:45.364 "unmap": true, 00:04:45.364 "flush": true, 00:04:45.364 "reset": true, 00:04:45.364 "nvme_admin": false, 00:04:45.364 "nvme_io": false, 00:04:45.364 "nvme_io_md": false, 00:04:45.364 "write_zeroes": true, 00:04:45.364 "zcopy": true, 00:04:45.364 "get_zone_info": false, 00:04:45.364 "zone_management": false, 00:04:45.364 "zone_append": false, 00:04:45.364 "compare": false, 00:04:45.364 "compare_and_write": false, 00:04:45.364 "abort": true, 00:04:45.364 "seek_hole": false, 00:04:45.364 "seek_data": false, 00:04:45.364 "copy": true, 00:04:45.364 "nvme_iov_md": false 00:04:45.364 }, 00:04:45.364 "memory_domains": [ 00:04:45.364 { 00:04:45.364 "dma_device_id": "system", 00:04:45.364 "dma_device_type": 1 00:04:45.364 }, 00:04:45.364 { 00:04:45.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.364 "dma_device_type": 2 00:04:45.364 } 00:04:45.364 ], 00:04:45.364 "driver_specific": {} 00:04:45.364 } 00:04:45.364 ]' 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.364 [2024-12-09 06:03:39.803976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.364 [2024-12-09 06:03:39.804000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.364 [2024-12-09 06:03:39.804011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d5b6c0 00:04:45.364 [2024-12-09 06:03:39.804016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.364 [2024-12-09 06:03:39.805064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.364 [2024-12-09 06:03:39.805080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.364 Passthru0 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.364 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.364 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.364 { 00:04:45.364 "name": "Malloc0", 00:04:45.364 "aliases": [ 00:04:45.364 "7a1e9655-ec34-459e-9e16-8f8d619172ea" 00:04:45.364 ], 00:04:45.364 "product_name": "Malloc disk", 00:04:45.364 "block_size": 512, 00:04:45.364 "num_blocks": 16384, 00:04:45.364 "uuid": "7a1e9655-ec34-459e-9e16-8f8d619172ea", 00:04:45.364 "assigned_rate_limits": { 00:04:45.364 "rw_ios_per_sec": 0, 00:04:45.364 "rw_mbytes_per_sec": 0, 00:04:45.364 "r_mbytes_per_sec": 0, 00:04:45.364 "w_mbytes_per_sec": 0 00:04:45.365 }, 00:04:45.365 "claimed": true, 00:04:45.365 "claim_type": "exclusive_write", 00:04:45.365 "zoned": false, 00:04:45.365 "supported_io_types": { 00:04:45.365 "read": true, 00:04:45.365 "write": true, 00:04:45.365 "unmap": true, 00:04:45.365 "flush": 
true, 00:04:45.365 "reset": true, 00:04:45.365 "nvme_admin": false, 00:04:45.365 "nvme_io": false, 00:04:45.365 "nvme_io_md": false, 00:04:45.365 "write_zeroes": true, 00:04:45.365 "zcopy": true, 00:04:45.365 "get_zone_info": false, 00:04:45.365 "zone_management": false, 00:04:45.365 "zone_append": false, 00:04:45.365 "compare": false, 00:04:45.365 "compare_and_write": false, 00:04:45.365 "abort": true, 00:04:45.365 "seek_hole": false, 00:04:45.365 "seek_data": false, 00:04:45.365 "copy": true, 00:04:45.365 "nvme_iov_md": false 00:04:45.365 }, 00:04:45.365 "memory_domains": [ 00:04:45.365 { 00:04:45.365 "dma_device_id": "system", 00:04:45.365 "dma_device_type": 1 00:04:45.365 }, 00:04:45.365 { 00:04:45.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.365 "dma_device_type": 2 00:04:45.365 } 00:04:45.365 ], 00:04:45.365 "driver_specific": {} 00:04:45.365 }, 00:04:45.365 { 00:04:45.365 "name": "Passthru0", 00:04:45.365 "aliases": [ 00:04:45.365 "80005db3-4095-5ae4-abe0-71d2be699b10" 00:04:45.365 ], 00:04:45.365 "product_name": "passthru", 00:04:45.365 "block_size": 512, 00:04:45.365 "num_blocks": 16384, 00:04:45.365 "uuid": "80005db3-4095-5ae4-abe0-71d2be699b10", 00:04:45.365 "assigned_rate_limits": { 00:04:45.365 "rw_ios_per_sec": 0, 00:04:45.365 "rw_mbytes_per_sec": 0, 00:04:45.365 "r_mbytes_per_sec": 0, 00:04:45.365 "w_mbytes_per_sec": 0 00:04:45.365 }, 00:04:45.365 "claimed": false, 00:04:45.365 "zoned": false, 00:04:45.365 "supported_io_types": { 00:04:45.365 "read": true, 00:04:45.365 "write": true, 00:04:45.365 "unmap": true, 00:04:45.365 "flush": true, 00:04:45.365 "reset": true, 00:04:45.365 "nvme_admin": false, 00:04:45.365 "nvme_io": false, 00:04:45.365 "nvme_io_md": false, 00:04:45.365 "write_zeroes": true, 00:04:45.365 "zcopy": true, 00:04:45.365 "get_zone_info": false, 00:04:45.365 "zone_management": false, 00:04:45.365 "zone_append": false, 00:04:45.365 "compare": false, 00:04:45.365 "compare_and_write": false, 00:04:45.365 "abort": true, 00:04:45.365 "seek_hole": false, 00:04:45.365 "seek_data": false, 00:04:45.365 "copy": true, 00:04:45.365 "nvme_iov_md": false 00:04:45.365 }, 00:04:45.365 "memory_domains": [ 00:04:45.365 { 00:04:45.365 "dma_device_id": "system", 00:04:45.365 "dma_device_type": 1 00:04:45.365 }, 00:04:45.365 { 00:04:45.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.365 "dma_device_type": 2 00:04:45.365 } 00:04:45.365 ], 00:04:45.365 "driver_specific": { 00:04:45.365 "passthru": { 00:04:45.365 "name": "Passthru0", 00:04:45.365 "base_bdev_name": "Malloc0" 00:04:45.365 } 00:04:45.365 } 00:04:45.365 } 00:04:45.365 ]' 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.365 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.365 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.627 06:03:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.627 00:04:45.627 real 0m0.290s 00:04:45.627 user 0m0.183s 00:04:45.627 sys 0m0.044s 00:04:45.627 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.627 06:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.627 ************************************ 00:04:45.627 END TEST rpc_integrity 00:04:45.627 ************************************ 00:04:45.627 06:03:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.627 06:03:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.627 06:03:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.627 06:03:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.627 ************************************ 00:04:45.627 START TEST rpc_plugins 00:04:45.627 ************************************ 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:45.627 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.627 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.627 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.627 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.627 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.627 { 00:04:45.627 "name": "Malloc1", 00:04:45.627 "aliases": [ 00:04:45.627 "ef3329d7-0710-4c63-8af8-000e11006805" 00:04:45.627 ], 00:04:45.627 "product_name": "Malloc disk", 00:04:45.627 "block_size": 4096, 00:04:45.627 "num_blocks": 256, 00:04:45.627 "uuid": "ef3329d7-0710-4c63-8af8-000e11006805", 00:04:45.627 "assigned_rate_limits": { 00:04:45.627 "rw_ios_per_sec": 0, 00:04:45.627 "rw_mbytes_per_sec": 0, 00:04:45.627 "r_mbytes_per_sec": 0, 00:04:45.627 "w_mbytes_per_sec": 0 00:04:45.627 }, 00:04:45.627 "claimed": false, 00:04:45.627 "zoned": false, 00:04:45.627 "supported_io_types": { 00:04:45.627 "read": true, 00:04:45.627 "write": true, 00:04:45.627 "unmap": true, 00:04:45.627 "flush": true, 00:04:45.627 "reset": true, 00:04:45.627 "nvme_admin": false, 00:04:45.627 "nvme_io": false, 00:04:45.627 "nvme_io_md": false, 00:04:45.627 "write_zeroes": true, 00:04:45.627 "zcopy": true, 00:04:45.627 "get_zone_info": false, 00:04:45.627 "zone_management": false, 00:04:45.627 "zone_append": false, 00:04:45.627 "compare": false, 00:04:45.627 "compare_and_write": false, 00:04:45.627 "abort": true, 00:04:45.627 "seek_hole": false, 00:04:45.627 "seek_data": false, 00:04:45.627 "copy": true, 00:04:45.627 "nvme_iov_md": false 
00:04:45.627 }, 00:04:45.627 "memory_domains": [ 00:04:45.627 { 00:04:45.628 "dma_device_id": "system", 00:04:45.628 "dma_device_type": 1 00:04:45.628 }, 00:04:45.628 { 00:04:45.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.628 "dma_device_type": 2 00:04:45.628 } 00:04:45.628 ], 00:04:45.628 "driver_specific": {} 00:04:45.628 } 00:04:45.628 ]' 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.628 06:03:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.628 00:04:45.628 real 0m0.154s 00:04:45.628 user 0m0.097s 00:04:45.628 sys 0m0.016s 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.628 06:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.628 ************************************ 00:04:45.628 END TEST rpc_plugins 00:04:45.628 ************************************ 00:04:45.889 06:03:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.889 06:03:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.889 06:03:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.889 06:03:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.889 ************************************ 00:04:45.889 START TEST rpc_trace_cmd_test 00:04:45.889 ************************************ 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.889 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid111941", 00:04:45.889 "tpoint_group_mask": "0x8", 00:04:45.889 "iscsi_conn": { 00:04:45.889 "mask": "0x2", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "scsi": { 00:04:45.889 "mask": "0x4", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "bdev": { 00:04:45.889 "mask": "0x8", 00:04:45.889 "tpoint_mask": "0xffffffffffffffff" 00:04:45.889 }, 00:04:45.889 "nvmf_rdma": { 00:04:45.889 "mask": "0x10", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "nvmf_tcp": { 00:04:45.889 "mask": "0x20", 00:04:45.889 
"tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "ftl": { 00:04:45.889 "mask": "0x40", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "blobfs": { 00:04:45.889 "mask": "0x80", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "dsa": { 00:04:45.889 "mask": "0x200", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "thread": { 00:04:45.889 "mask": "0x400", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "nvme_pcie": { 00:04:45.889 "mask": "0x800", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "iaa": { 00:04:45.889 "mask": "0x1000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "nvme_tcp": { 00:04:45.889 "mask": "0x2000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "bdev_nvme": { 00:04:45.889 "mask": "0x4000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "sock": { 00:04:45.889 "mask": "0x8000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "blob": { 00:04:45.889 "mask": "0x10000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "bdev_raid": { 00:04:45.889 "mask": "0x20000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 }, 00:04:45.889 "scheduler": { 00:04:45.889 "mask": "0x40000", 00:04:45.889 "tpoint_mask": "0x0" 00:04:45.889 } 00:04:45.889 }' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.889 00:04:45.889 real 0m0.191s 00:04:45.889 user 0m0.153s 00:04:45.889 sys 0m0.028s 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.889 06:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.889 ************************************ 00:04:45.889 END TEST rpc_trace_cmd_test 00:04:45.889 ************************************ 00:04:46.151 06:03:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:46.151 06:03:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.151 06:03:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.151 06:03:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.151 06:03:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.151 06:03:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 ************************************ 00:04:46.151 START TEST rpc_daemon_integrity 00:04:46.151 ************************************ 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.151 06:03:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.151 { 00:04:46.151 "name": "Malloc2", 00:04:46.151 "aliases": [ 00:04:46.151 "0005a36e-ff6a-4826-b0c6-9deb9dc25c1c" 00:04:46.151 ], 00:04:46.151 "product_name": "Malloc disk", 00:04:46.151 "block_size": 512, 00:04:46.151 "num_blocks": 16384, 00:04:46.151 "uuid": "0005a36e-ff6a-4826-b0c6-9deb9dc25c1c", 00:04:46.151 "assigned_rate_limits": { 00:04:46.151 "rw_ios_per_sec": 0, 00:04:46.151 "rw_mbytes_per_sec": 0, 00:04:46.151 "r_mbytes_per_sec": 0, 00:04:46.151 "w_mbytes_per_sec": 0 00:04:46.151 }, 00:04:46.151 "claimed": false, 00:04:46.151 "zoned": false, 00:04:46.151 "supported_io_types": { 00:04:46.151 "read": true, 00:04:46.151 "write": true, 00:04:46.151 "unmap": true, 00:04:46.151 "flush": true, 00:04:46.151 "reset": true, 00:04:46.151 "nvme_admin": false, 00:04:46.151 "nvme_io": false, 00:04:46.151 "nvme_io_md": false, 00:04:46.151 "write_zeroes": true, 00:04:46.151 "zcopy": true, 00:04:46.151 "get_zone_info": false, 00:04:46.151 "zone_management": false, 00:04:46.151 "zone_append": false, 00:04:46.151 "compare": false, 00:04:46.151 "compare_and_write": false, 00:04:46.151 "abort": true, 00:04:46.151 "seek_hole": false, 00:04:46.151 "seek_data": false, 00:04:46.151 "copy": true, 00:04:46.151 "nvme_iov_md": false 00:04:46.151 }, 00:04:46.151 "memory_domains": [ 00:04:46.151 { 00:04:46.151 "dma_device_id": "system", 00:04:46.151 "dma_device_type": 1 00:04:46.151 }, 00:04:46.151 { 00:04:46.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.151 "dma_device_type": 2 00:04:46.151 } 00:04:46.151 ], 00:04:46.151 "driver_specific": {} 00:04:46.151 } 00:04:46.151 ]' 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 [2024-12-09 06:03:40.634123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:46.151 
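The trace_get_info dump above shows a non-zero tpoint_mask only for the bdev group (0xffffffffffffffff under mask 0x8), matching the '-e bdev' flag the target was started with. The shared-memory ring it names can be decoded with the spdk_trace tool; a sketch using the pid and shm path reported earlier:

    # Decode tracepoints from the live target's trace ring:
    build/bin/spdk_trace -s spdk_tgt -p 111941
    # or post-mortem, from the copied shm file:
    build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid111941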
[2024-12-09 06:03:40.634147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.151 [2024-12-09 06:03:40.634158] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d28f20 00:04:46.151 [2024-12-09 06:03:40.634166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.151 [2024-12-09 06:03:40.635145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.151 [2024-12-09 06:03:40.635162] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.151 Passthru0 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.151 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.151 { 00:04:46.151 "name": "Malloc2", 00:04:46.151 "aliases": [ 00:04:46.151 "0005a36e-ff6a-4826-b0c6-9deb9dc25c1c" 00:04:46.151 ], 00:04:46.151 "product_name": "Malloc disk", 00:04:46.151 "block_size": 512, 00:04:46.151 "num_blocks": 16384, 00:04:46.151 "uuid": "0005a36e-ff6a-4826-b0c6-9deb9dc25c1c", 00:04:46.152 "assigned_rate_limits": { 00:04:46.152 "rw_ios_per_sec": 0, 00:04:46.152 "rw_mbytes_per_sec": 0, 00:04:46.152 "r_mbytes_per_sec": 0, 00:04:46.152 "w_mbytes_per_sec": 0 00:04:46.152 }, 00:04:46.152 "claimed": true, 00:04:46.152 "claim_type": "exclusive_write", 00:04:46.152 "zoned": false, 00:04:46.152 "supported_io_types": { 00:04:46.152 "read": true, 00:04:46.152 "write": true, 00:04:46.152 "unmap": true, 00:04:46.152 "flush": true, 00:04:46.152 "reset": true, 00:04:46.152 "nvme_admin": false, 00:04:46.152 "nvme_io": false, 00:04:46.152 "nvme_io_md": false, 00:04:46.152 "write_zeroes": true, 00:04:46.152 "zcopy": true, 00:04:46.152 "get_zone_info": false, 00:04:46.152 "zone_management": false, 00:04:46.152 "zone_append": false, 00:04:46.152 "compare": false, 00:04:46.152 "compare_and_write": false, 00:04:46.152 "abort": true, 00:04:46.152 "seek_hole": false, 00:04:46.152 "seek_data": false, 00:04:46.152 "copy": true, 00:04:46.152 "nvme_iov_md": false 00:04:46.152 }, 00:04:46.152 "memory_domains": [ 00:04:46.152 { 00:04:46.152 "dma_device_id": "system", 00:04:46.152 "dma_device_type": 1 00:04:46.152 }, 00:04:46.152 { 00:04:46.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.152 "dma_device_type": 2 00:04:46.152 } 00:04:46.152 ], 00:04:46.152 "driver_specific": {} 00:04:46.152 }, 00:04:46.152 { 00:04:46.152 "name": "Passthru0", 00:04:46.152 "aliases": [ 00:04:46.152 "44215d9e-b5da-5e48-81ac-08d2f28afa34" 00:04:46.152 ], 00:04:46.152 "product_name": "passthru", 00:04:46.152 "block_size": 512, 00:04:46.152 "num_blocks": 16384, 00:04:46.152 "uuid": "44215d9e-b5da-5e48-81ac-08d2f28afa34", 00:04:46.152 "assigned_rate_limits": { 00:04:46.152 "rw_ios_per_sec": 0, 00:04:46.152 "rw_mbytes_per_sec": 0, 00:04:46.152 "r_mbytes_per_sec": 0, 00:04:46.152 "w_mbytes_per_sec": 0 00:04:46.152 }, 00:04:46.152 "claimed": false, 00:04:46.152 "zoned": false, 00:04:46.152 "supported_io_types": { 00:04:46.152 "read": true, 00:04:46.152 "write": true, 00:04:46.152 "unmap": true, 00:04:46.152 "flush": true, 00:04:46.152 "reset": true, 
00:04:46.152 "nvme_admin": false, 00:04:46.152 "nvme_io": false, 00:04:46.152 "nvme_io_md": false, 00:04:46.152 "write_zeroes": true, 00:04:46.152 "zcopy": true, 00:04:46.152 "get_zone_info": false, 00:04:46.152 "zone_management": false, 00:04:46.152 "zone_append": false, 00:04:46.152 "compare": false, 00:04:46.152 "compare_and_write": false, 00:04:46.152 "abort": true, 00:04:46.152 "seek_hole": false, 00:04:46.152 "seek_data": false, 00:04:46.152 "copy": true, 00:04:46.152 "nvme_iov_md": false 00:04:46.152 }, 00:04:46.152 "memory_domains": [ 00:04:46.152 { 00:04:46.152 "dma_device_id": "system", 00:04:46.152 "dma_device_type": 1 00:04:46.152 }, 00:04:46.152 { 00:04:46.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.152 "dma_device_type": 2 00:04:46.152 } 00:04:46.152 ], 00:04:46.152 "driver_specific": { 00:04:46.152 "passthru": { 00:04:46.152 "name": "Passthru0", 00:04:46.152 "base_bdev_name": "Malloc2" 00:04:46.152 } 00:04:46.152 } 00:04:46.152 } 00:04:46.152 ]' 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.152 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.413 06:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.413 00:04:46.413 real 0m0.253s 00:04:46.413 user 0m0.151s 00:04:46.413 sys 0m0.038s 00:04:46.413 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.413 06:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.413 ************************************ 00:04:46.413 END TEST rpc_daemon_integrity 00:04:46.413 ************************************ 00:04:46.413 06:03:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.413 06:03:40 rpc -- rpc/rpc.sh@84 -- # killprocess 111941 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 111941 ']' 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@958 -- # kill -0 111941 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111941 
00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111941' 00:04:46.413 killing process with pid 111941 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@973 -- # kill 111941 00:04:46.413 06:03:40 rpc -- common/autotest_common.sh@978 -- # wait 111941 00:04:46.674 00:04:46.674 real 0m2.476s 00:04:46.674 user 0m3.187s 00:04:46.674 sys 0m0.704s 00:04:46.674 06:03:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.674 06:03:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.674 ************************************ 00:04:46.674 END TEST rpc 00:04:46.674 ************************************ 00:04:46.674 06:03:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.674 06:03:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.674 06:03:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.674 06:03:41 -- common/autotest_common.sh@10 -- # set +x 00:04:46.674 ************************************ 00:04:46.674 START TEST skip_rpc 00:04:46.674 ************************************ 00:04:46.674 06:03:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.674 * Looking for test storage... 00:04:46.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.674 06:03:41 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.674 06:03:41 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.674 06:03:41 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.938 06:03:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.938 --rc genhtml_branch_coverage=1 00:04:46.938 --rc genhtml_function_coverage=1 00:04:46.938 --rc genhtml_legend=1 00:04:46.938 --rc geninfo_all_blocks=1 00:04:46.938 --rc geninfo_unexecuted_blocks=1 00:04:46.938 00:04:46.938 ' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.938 --rc genhtml_branch_coverage=1 00:04:46.938 --rc genhtml_function_coverage=1 00:04:46.938 --rc genhtml_legend=1 00:04:46.938 --rc geninfo_all_blocks=1 00:04:46.938 --rc geninfo_unexecuted_blocks=1 00:04:46.938 00:04:46.938 ' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.938 --rc genhtml_branch_coverage=1 00:04:46.938 --rc genhtml_function_coverage=1 00:04:46.938 --rc genhtml_legend=1 00:04:46.938 --rc geninfo_all_blocks=1 00:04:46.938 --rc geninfo_unexecuted_blocks=1 00:04:46.938 00:04:46.938 ' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.938 --rc genhtml_branch_coverage=1 00:04:46.938 --rc genhtml_function_coverage=1 00:04:46.938 --rc genhtml_legend=1 00:04:46.938 --rc geninfo_all_blocks=1 00:04:46.938 --rc geninfo_unexecuted_blocks=1 00:04:46.938 00:04:46.938 ' 00:04:46.938 06:03:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.938 06:03:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.938 06:03:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.938 06:03:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.938 ************************************ 00:04:46.938 START TEST skip_rpc 00:04:46.938 ************************************ 00:04:46.938 06:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:46.938 
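test_skip_rpc, starting here, boots the target with --no-rpc-server, so no listen socket is ever created and the spdk_get_version call attempted below must fail; that failure is the passing condition. By hand, the shape of the check is:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                  # mirrors the test's fixed settle time
    if scripts/rpc.py spdk_get_version; then
        echo 'FAIL: RPC answered despite --no-rpc-server'
    fi
    kill "$spdk_pid" && wait "$spdk_pid"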
06:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=112467 00:04:46.938 06:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.938 06:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.938 06:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.938 [2024-12-09 06:03:41.415340] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:04:46.939 [2024-12-09 06:03:41.415402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112467 ] 00:04:46.939 [2024-12-09 06:03:41.502652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.200 [2024-12-09 06:03:41.543215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 112467 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 112467 ']' 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 112467 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112467 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112467' 00:04:52.483 killing process with pid 112467 00:04:52.483 06:03:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 112467 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 112467 00:04:52.483 00:04:52.483 real 0m5.265s 00:04:52.483 user 0m5.059s 00:04:52.483 sys 0m0.254s 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.483 06:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.483 ************************************ 00:04:52.483 END TEST skip_rpc 00:04:52.483 ************************************ 00:04:52.483 06:03:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.483 06:03:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.483 06:03:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.483 06:03:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.483 ************************************ 00:04:52.483 START TEST skip_rpc_with_json 00:04:52.483 ************************************ 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=113406 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 113406 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 113406 ']' 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.483 06:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.483 [2024-12-09 06:03:46.754579] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
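The skip_rpc case that just ended (END TEST skip_rpc) asserts the inverse of a normal bring-up: with --no-rpc-server the target must run, yet any RPC against it has to fail, and the NOT wrapper turns that failure into a pass. A minimal sketch of the same shape, with the binary and helper paths assumed relative to an SPDK checkout:

# skip_rpc in miniature: target up, RPC socket absent, call must fail.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                   # nothing to waitforlisten on: no socket exists
if scripts/rpc.py spdk_get_version 2>/dev/null; then
  echo "FAIL: RPC answered despite --no-rpc-server" >&2
  kill "$spdk_pid"; exit 1
fi
echo "PASS: RPC refused as expected"
kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null || true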
00:04:52.483 [2024-12-09 06:03:46.754632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113406 ] 00:04:52.483 [2024-12-09 06:03:46.840989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.483 [2024-12-09 06:03:46.875620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.053 [2024-12-09 06:03:47.558438] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:53.053 request: 00:04:53.053 { 00:04:53.053 "trtype": "tcp", 00:04:53.053 "method": "nvmf_get_transports", 00:04:53.053 "req_id": 1 00:04:53.053 } 00:04:53.053 Got JSON-RPC error response 00:04:53.053 response: 00:04:53.053 { 00:04:53.053 "code": -19, 00:04:53.053 "message": "No such device" 00:04:53.053 } 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.053 [2024-12-09 06:03:47.570542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.053 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.313 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.313 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.313 { 00:04:53.313 "subsystems": [ 00:04:53.313 { 00:04:53.313 "subsystem": "fsdev", 00:04:53.313 "config": [ 00:04:53.313 { 00:04:53.313 "method": "fsdev_set_opts", 00:04:53.313 "params": { 00:04:53.313 "fsdev_io_pool_size": 65535, 00:04:53.313 "fsdev_io_cache_size": 256 00:04:53.313 } 00:04:53.313 } 00:04:53.313 ] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "vfio_user_target", 00:04:53.313 "config": null 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "keyring", 00:04:53.313 "config": [] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "iobuf", 00:04:53.313 "config": [ 00:04:53.313 { 00:04:53.313 "method": "iobuf_set_options", 00:04:53.313 "params": { 00:04:53.313 "small_pool_count": 8192, 00:04:53.313 "large_pool_count": 1024, 00:04:53.313 "small_bufsize": 8192, 00:04:53.313 "large_bufsize": 135168, 00:04:53.313 "enable_numa": false 00:04:53.313 } 00:04:53.313 } 00:04:53.313 
] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "sock", 00:04:53.313 "config": [ 00:04:53.313 { 00:04:53.313 "method": "sock_set_default_impl", 00:04:53.313 "params": { 00:04:53.313 "impl_name": "posix" 00:04:53.313 } 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "method": "sock_impl_set_options", 00:04:53.313 "params": { 00:04:53.313 "impl_name": "ssl", 00:04:53.313 "recv_buf_size": 4096, 00:04:53.313 "send_buf_size": 4096, 00:04:53.313 "enable_recv_pipe": true, 00:04:53.313 "enable_quickack": false, 00:04:53.313 "enable_placement_id": 0, 00:04:53.313 "enable_zerocopy_send_server": true, 00:04:53.313 "enable_zerocopy_send_client": false, 00:04:53.313 "zerocopy_threshold": 0, 00:04:53.313 "tls_version": 0, 00:04:53.313 "enable_ktls": false 00:04:53.313 } 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "method": "sock_impl_set_options", 00:04:53.313 "params": { 00:04:53.313 "impl_name": "posix", 00:04:53.313 "recv_buf_size": 2097152, 00:04:53.313 "send_buf_size": 2097152, 00:04:53.313 "enable_recv_pipe": true, 00:04:53.313 "enable_quickack": false, 00:04:53.313 "enable_placement_id": 0, 00:04:53.313 "enable_zerocopy_send_server": true, 00:04:53.313 "enable_zerocopy_send_client": false, 00:04:53.313 "zerocopy_threshold": 0, 00:04:53.313 "tls_version": 0, 00:04:53.313 "enable_ktls": false 00:04:53.313 } 00:04:53.313 } 00:04:53.313 ] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "vmd", 00:04:53.313 "config": [] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "accel", 00:04:53.313 "config": [ 00:04:53.313 { 00:04:53.313 "method": "accel_set_options", 00:04:53.313 "params": { 00:04:53.313 "small_cache_size": 128, 00:04:53.313 "large_cache_size": 16, 00:04:53.313 "task_count": 2048, 00:04:53.313 "sequence_count": 2048, 00:04:53.313 "buf_count": 2048 00:04:53.313 } 00:04:53.313 } 00:04:53.313 ] 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "subsystem": "bdev", 00:04:53.313 "config": [ 00:04:53.313 { 00:04:53.313 "method": "bdev_set_options", 00:04:53.313 "params": { 00:04:53.313 "bdev_io_pool_size": 65535, 00:04:53.313 "bdev_io_cache_size": 256, 00:04:53.313 "bdev_auto_examine": true, 00:04:53.313 "iobuf_small_cache_size": 128, 00:04:53.313 "iobuf_large_cache_size": 16 00:04:53.313 } 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "method": "bdev_raid_set_options", 00:04:53.313 "params": { 00:04:53.313 "process_window_size_kb": 1024, 00:04:53.313 "process_max_bandwidth_mb_sec": 0 00:04:53.313 } 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "method": "bdev_iscsi_set_options", 00:04:53.313 "params": { 00:04:53.313 "timeout_sec": 30 00:04:53.313 } 00:04:53.313 }, 00:04:53.313 { 00:04:53.313 "method": "bdev_nvme_set_options", 00:04:53.313 "params": { 00:04:53.313 "action_on_timeout": "none", 00:04:53.313 "timeout_us": 0, 00:04:53.313 "timeout_admin_us": 0, 00:04:53.313 "keep_alive_timeout_ms": 10000, 00:04:53.313 "arbitration_burst": 0, 00:04:53.313 "low_priority_weight": 0, 00:04:53.313 "medium_priority_weight": 0, 00:04:53.313 "high_priority_weight": 0, 00:04:53.313 "nvme_adminq_poll_period_us": 10000, 00:04:53.313 "nvme_ioq_poll_period_us": 0, 00:04:53.313 "io_queue_requests": 0, 00:04:53.313 "delay_cmd_submit": true, 00:04:53.313 "transport_retry_count": 4, 00:04:53.313 "bdev_retry_count": 3, 00:04:53.313 "transport_ack_timeout": 0, 00:04:53.313 "ctrlr_loss_timeout_sec": 0, 00:04:53.313 "reconnect_delay_sec": 0, 00:04:53.313 "fast_io_fail_timeout_sec": 0, 00:04:53.313 "disable_auto_failback": false, 00:04:53.313 "generate_uuids": false, 00:04:53.313 "transport_tos": 0, 
00:04:53.313 "nvme_error_stat": false, 00:04:53.313 "rdma_srq_size": 0, 00:04:53.313 "io_path_stat": false, 00:04:53.313 "allow_accel_sequence": false, 00:04:53.313 "rdma_max_cq_size": 0, 00:04:53.314 "rdma_cm_event_timeout_ms": 0, 00:04:53.314 "dhchap_digests": [ 00:04:53.314 "sha256", 00:04:53.314 "sha384", 00:04:53.314 "sha512" 00:04:53.314 ], 00:04:53.314 "dhchap_dhgroups": [ 00:04:53.314 "null", 00:04:53.314 "ffdhe2048", 00:04:53.314 "ffdhe3072", 00:04:53.314 "ffdhe4096", 00:04:53.314 "ffdhe6144", 00:04:53.314 "ffdhe8192" 00:04:53.314 ] 00:04:53.314 } 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "method": "bdev_nvme_set_hotplug", 00:04:53.314 "params": { 00:04:53.314 "period_us": 100000, 00:04:53.314 "enable": false 00:04:53.314 } 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "method": "bdev_wait_for_examine" 00:04:53.314 } 00:04:53.314 ] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "scsi", 00:04:53.314 "config": null 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "scheduler", 00:04:53.314 "config": [ 00:04:53.314 { 00:04:53.314 "method": "framework_set_scheduler", 00:04:53.314 "params": { 00:04:53.314 "name": "static" 00:04:53.314 } 00:04:53.314 } 00:04:53.314 ] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "vhost_scsi", 00:04:53.314 "config": [] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "vhost_blk", 00:04:53.314 "config": [] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "ublk", 00:04:53.314 "config": [] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "nbd", 00:04:53.314 "config": [] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "nvmf", 00:04:53.314 "config": [ 00:04:53.314 { 00:04:53.314 "method": "nvmf_set_config", 00:04:53.314 "params": { 00:04:53.314 "discovery_filter": "match_any", 00:04:53.314 "admin_cmd_passthru": { 00:04:53.314 "identify_ctrlr": false 00:04:53.314 }, 00:04:53.314 "dhchap_digests": [ 00:04:53.314 "sha256", 00:04:53.314 "sha384", 00:04:53.314 "sha512" 00:04:53.314 ], 00:04:53.314 "dhchap_dhgroups": [ 00:04:53.314 "null", 00:04:53.314 "ffdhe2048", 00:04:53.314 "ffdhe3072", 00:04:53.314 "ffdhe4096", 00:04:53.314 "ffdhe6144", 00:04:53.314 "ffdhe8192" 00:04:53.314 ] 00:04:53.314 } 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "method": "nvmf_set_max_subsystems", 00:04:53.314 "params": { 00:04:53.314 "max_subsystems": 1024 00:04:53.314 } 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "method": "nvmf_set_crdt", 00:04:53.314 "params": { 00:04:53.314 "crdt1": 0, 00:04:53.314 "crdt2": 0, 00:04:53.314 "crdt3": 0 00:04:53.314 } 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "method": "nvmf_create_transport", 00:04:53.314 "params": { 00:04:53.314 "trtype": "TCP", 00:04:53.314 "max_queue_depth": 128, 00:04:53.314 "max_io_qpairs_per_ctrlr": 127, 00:04:53.314 "in_capsule_data_size": 4096, 00:04:53.314 "max_io_size": 131072, 00:04:53.314 "io_unit_size": 131072, 00:04:53.314 "max_aq_depth": 128, 00:04:53.314 "num_shared_buffers": 511, 00:04:53.314 "buf_cache_size": 4294967295, 00:04:53.314 "dif_insert_or_strip": false, 00:04:53.314 "zcopy": false, 00:04:53.314 "c2h_success": true, 00:04:53.314 "sock_priority": 0, 00:04:53.314 "abort_timeout_sec": 1, 00:04:53.314 "ack_timeout": 0, 00:04:53.314 "data_wr_pool_size": 0 00:04:53.314 } 00:04:53.314 } 00:04:53.314 ] 00:04:53.314 }, 00:04:53.314 { 00:04:53.314 "subsystem": "iscsi", 00:04:53.314 "config": [ 00:04:53.314 { 00:04:53.314 "method": "iscsi_set_options", 00:04:53.314 "params": { 00:04:53.314 "node_base": "iqn.2016-06.io.spdk", 00:04:53.314 "max_sessions": 
128, 00:04:53.314 "max_connections_per_session": 2, 00:04:53.314 "max_queue_depth": 64, 00:04:53.314 "default_time2wait": 2, 00:04:53.314 "default_time2retain": 20, 00:04:53.314 "first_burst_length": 8192, 00:04:53.314 "immediate_data": true, 00:04:53.314 "allow_duplicated_isid": false, 00:04:53.314 "error_recovery_level": 0, 00:04:53.314 "nop_timeout": 60, 00:04:53.314 "nop_in_interval": 30, 00:04:53.314 "disable_chap": false, 00:04:53.314 "require_chap": false, 00:04:53.314 "mutual_chap": false, 00:04:53.314 "chap_group": 0, 00:04:53.314 "max_large_datain_per_connection": 64, 00:04:53.314 "max_r2t_per_connection": 4, 00:04:53.314 "pdu_pool_size": 36864, 00:04:53.314 "immediate_data_pool_size": 16384, 00:04:53.314 "data_out_pool_size": 2048 00:04:53.314 } 00:04:53.314 } 00:04:53.314 ] 00:04:53.314 } 00:04:53.314 ] 00:04:53.314 } 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 113406 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 113406 ']' 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 113406 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113406 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113406' 00:04:53.314 killing process with pid 113406 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 113406 00:04:53.314 06:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 113406 00:04:53.573 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=113697 00:04:53.573 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.573 06:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.860 06:03:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 113697 00:04:58.860 06:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 113697 ']' 00:04:58.860 06:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 113697 00:04:58.860 06:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113697 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 113697' 00:04:58.860 killing process with pid 113697 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 113697 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 113697 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.860 00:04:58.860 real 0m6.556s 00:04:58.860 user 0m6.462s 00:04:58.860 sys 0m0.561s 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 ************************************ 00:04:58.860 END TEST skip_rpc_with_json 00:04:58.860 ************************************ 00:04:58.860 06:03:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.860 06:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.860 06:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.860 06:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 ************************************ 00:04:58.860 START TEST skip_rpc_with_delay 00:04:58.860 ************************************ 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.860 [2024-12-09 
06:03:53.392938] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.860 00:04:58.860 real 0m0.079s 00:04:58.860 user 0m0.046s 00:04:58.860 sys 0m0.032s 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.860 06:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 ************************************ 00:04:58.860 END TEST skip_rpc_with_delay 00:04:58.860 ************************************ 00:04:59.122 06:03:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.122 06:03:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.122 06:03:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.122 06:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.122 06:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.122 06:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.122 ************************************ 00:04:59.122 START TEST exit_on_failed_rpc_init 00:04:59.122 ************************************ 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=114672 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 114672 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 114672 ']' 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.122 06:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.122 [2024-12-09 06:03:53.549954] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
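skip_rpc_with_delay, finished just above, is pure argument validation: --wait-for-rpc is meaningless when --no-rpc-server suppresses the server, so spdk_tgt must refuse to start with the app.c error shown, and the NOT wrapper flips that refusal into a pass. The check reduces to (binary path assumed as before):

# The traced invalid flag combination has to fail fast.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "FAIL: target accepted --wait-for-rpc without an RPC server" >&2
  exit 1
fi
echo "PASS: flag combination rejected"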
00:04:59.122 [2024-12-09 06:03:53.550014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114672 ] 00:04:59.122 [2024-12-09 06:03:53.637169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.122 [2024-12-09 06:03:53.671914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.060 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.061 [2024-12-09 06:03:54.408390] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:00.061 [2024-12-09 06:03:54.408439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114935 ] 00:05:00.061 [2024-12-09 06:03:54.474233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.061 [2024-12-09 06:03:54.508332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.061 [2024-12-09 06:03:54.508382] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
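Those two ERRORs are the point of exit_on_failed_rpc_init: the second target (pid 114935, core mask 0x2) loses the race for /var/tmp/spdk.sock, fails RPC init, and must spdk_app_stop on non-zero. Two targets can coexist only on distinct RPC sockets, via the same -r switch the json_config test uses later; a sketch with illustrative socket names:

# Two targets, two RPC sockets: both should initialize cleanly
# (SPDK derives a per-pid hugepage file prefix, as the EAL lines above show).
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock & pid_a=$!
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock & pid_b=$!
sleep 2                                   # crude; the suite polls with waitforlisten
scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version
kill "$pid_a" "$pid_b"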
00:05:00.061 [2024-12-09 06:03:54.508391] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.061 [2024-12-09 06:03:54.508398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 114672 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 114672 ']' 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 114672 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114672 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114672' 00:05:00.061 killing process with pid 114672 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 114672 00:05:00.061 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 114672 00:05:00.320 00:05:00.320 real 0m1.300s 00:05:00.320 user 0m1.526s 00:05:00.320 sys 0m0.362s 00:05:00.320 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.320 06:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 ************************************ 00:05:00.320 END TEST exit_on_failed_rpc_init 00:05:00.320 ************************************ 00:05:00.320 06:03:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.320 00:05:00.320 real 0m13.713s 00:05:00.320 user 0m13.311s 00:05:00.320 sys 0m1.535s 00:05:00.320 06:03:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.320 06:03:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 ************************************ 00:05:00.320 END TEST skip_rpc 00:05:00.320 ************************************ 00:05:00.320 06:03:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.320 06:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.320 06:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.320 06:03:54 -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.579 ************************************ 00:05:00.579 START TEST rpc_client 00:05:00.579 ************************************ 00:05:00.579 06:03:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.579 * Looking for test storage... 00:05:00.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.579 06:03:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.579 06:03:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.579 --rc genhtml_branch_coverage=1 00:05:00.579 --rc genhtml_function_coverage=1 00:05:00.579 --rc genhtml_legend=1 00:05:00.579 --rc geninfo_all_blocks=1 00:05:00.579 --rc geninfo_unexecuted_blocks=1 00:05:00.579 00:05:00.579 ' 00:05:00.580 06:03:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 06:03:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 06:03:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.580 --rc genhtml_branch_coverage=1 00:05:00.580 --rc genhtml_function_coverage=1 00:05:00.580 --rc genhtml_legend=1 00:05:00.580 --rc geninfo_all_blocks=1 00:05:00.580 --rc geninfo_unexecuted_blocks=1 00:05:00.580 00:05:00.580 ' 00:05:00.580 06:03:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:00.580 OK 00:05:00.580 06:03:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.580 00:05:00.580 real 0m0.222s 00:05:00.580 user 0m0.136s 00:05:00.580 sys 0m0.102s 00:05:00.580 06:03:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.580 06:03:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.580 ************************************ 00:05:00.580 END TEST rpc_client 00:05:00.580 ************************************ 00:05:00.840 06:03:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
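rpc_client_test is a compiled C client (test/rpc_client) that exercises SPDK's JSON-RPC framing directly; the single OK line above is its whole verdict. The same round trip can be spot-checked from the shell against any running target, assuming the default socket path and a netcat built with Unix-socket support:

# Through the helper:
scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
# Raw JSON-RPC 2.0 over the Unix socket, no helper (request shape matches
# the error responses visible in the skip_rpc_with_json trace):
printf '%s' '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' \
  | nc -U -w 1 /var/tmp/spdk.sock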
00:05:00.840 06:03:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.840 06:03:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.840 06:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 ************************************ 00:05:00.840 START TEST json_config 00:05:00.840 ************************************ 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.840 06:03:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.840 06:03:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.840 06:03:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.840 06:03:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.840 06:03:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.840 06:03:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.840 06:03:55 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.840 06:03:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.840 06:03:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.840 06:03:55 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.840 06:03:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.840 06:03:55 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.840 06:03:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.840 06:03:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.840 06:03:55 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.840 --rc genhtml_branch_coverage=1 00:05:00.840 --rc genhtml_function_coverage=1 00:05:00.840 --rc genhtml_legend=1 00:05:00.840 --rc geninfo_all_blocks=1 00:05:00.840 --rc geninfo_unexecuted_blocks=1 00:05:00.840 00:05:00.840 ' 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.840 --rc genhtml_branch_coverage=1 00:05:00.840 --rc genhtml_function_coverage=1 00:05:00.840 --rc genhtml_legend=1 00:05:00.840 --rc geninfo_all_blocks=1 00:05:00.840 --rc geninfo_unexecuted_blocks=1 00:05:00.840 00:05:00.840 ' 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.840 --rc genhtml_branch_coverage=1 00:05:00.840 --rc genhtml_function_coverage=1 00:05:00.840 --rc genhtml_legend=1 00:05:00.840 --rc geninfo_all_blocks=1 00:05:00.840 --rc geninfo_unexecuted_blocks=1 00:05:00.840 00:05:00.840 ' 00:05:00.840 06:03:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.840 --rc genhtml_branch_coverage=1 00:05:00.840 --rc genhtml_function_coverage=1 00:05:00.840 --rc genhtml_legend=1 00:05:00.840 --rc geninfo_all_blocks=1 00:05:00.840 --rc geninfo_unexecuted_blocks=1 00:05:00.840 00:05:00.840 ' 00:05:00.840 06:03:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:00.840 06:03:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.840 06:03:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.841 06:03:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.841 06:03:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.841 06:03:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.841 06:03:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.841 06:03:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.841 06:03:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.841 06:03:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.841 06:03:55 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.841 06:03:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@51 -- # : 0 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:00.841 06:03:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.841 06:03:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:00.841 INFO: JSON configuration test init 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.841 06:03:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.841 06:03:55 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:00.841 06:03:55 json_config -- json_config/common.sh@10 -- # shift 00:05:00.841 06:03:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.841 06:03:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.841 06:03:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.841 06:03:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.841 06:03:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.841 06:03:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=115114 00:05:00.841 06:03:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.841 Waiting for target to run... 00:05:00.841 06:03:55 json_config -- json_config/common.sh@25 -- # waitforlisten 115114 /var/tmp/spdk_tgt.sock 00:05:00.841 06:03:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 115114 ']' 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.841 06:03:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.102 [2024-12-09 06:03:55.477347] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
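Unlike the earlier tests, json_config starts its target paused (--wait-for-rpc) on a private socket (-r /var/tmp/spdk_tgt.sock): nothing initializes until configuration arrives over RPC, which is what the tgt_rpc load_config call in the following lines performs. The wrapper amounts to roughly this, with socket and file names as declared in the trace:

# tgt_rpc pins every call to the target's private socket.
tgt_rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
# load_config reads a saved configuration from stdin, replays its method
# calls, and lets the paused framework finish initialization.
tgt_rpc load_config < spdk_tgt_config.json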
00:05:01.102 [2024-12-09 06:03:55.477417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115114 ] 00:05:01.390 [2024-12-09 06:03:55.909189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.390 [2024-12-09 06:03:55.942052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:01.959 06:03:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:01.959 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.959 06:03:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:01.959 06:03:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:01.959 06:03:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:05.254 06:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:05.254 06:03:59 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.254 06:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.254 06:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.254 MallocForNvmf0 00:05:05.254 06:03:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.254 06:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.515 MallocForNvmf1 00:05:05.515 06:03:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.515 06:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.775 [2024-12-09 06:04:00.115000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.776 06:04:00 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.776 06:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.776 06:04:00 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.776 06:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.036 06:04:00 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.036 06:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.296 06:04:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.296 06:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.296 [2024-12-09 06:04:00.769005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.296 06:04:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.296 06:04:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.296 06:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 06:04:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.296 06:04:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.296 06:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 06:04:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:06.296 06:04:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.296 06:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.556 MallocBdevForConfigChangeCheck 00:05:06.556 06:04:01 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:06.556 06:04:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.556 06:04:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.556 06:04:01 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:06.556 06:04:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.817 06:04:01 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:06.817 INFO: shutting down applications... 
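Stripped of the xtrace noise, the NVMe-oF target assembled above is just this RPC sequence (rpc.py path shortened):

    # backing bdevs for the two namespaces
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420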
00:05:06.817 06:04:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:06.817 06:04:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:06.817 06:04:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:06.817 06:04:01 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.362 Calling clear_iscsi_subsystem 00:05:09.362 Calling clear_nvmf_subsystem 00:05:09.362 Calling clear_nbd_subsystem 00:05:09.362 Calling clear_ublk_subsystem 00:05:09.362 Calling clear_vhost_blk_subsystem 00:05:09.362 Calling clear_vhost_scsi_subsystem 00:05:09.362 Calling clear_bdev_subsystem 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.362 06:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.630 06:04:04 json_config -- json_config/json_config.sh@352 -- # break 00:05:09.630 06:04:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:09.630 06:04:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:09.630 06:04:04 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.630 06:04:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.630 06:04:04 json_config -- json_config/common.sh@35 -- # [[ -n 115114 ]] 00:05:09.630 06:04:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 115114 00:05:09.630 06:04:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.630 06:04:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.630 06:04:04 json_config -- json_config/common.sh@41 -- # kill -0 115114 00:05:09.630 06:04:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.201 06:04:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.201 06:04:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.201 06:04:04 json_config -- json_config/common.sh@41 -- # kill -0 115114 00:05:10.201 06:04:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.201 06:04:04 json_config -- json_config/common.sh@43 -- # break 00:05:10.201 06:04:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.201 06:04:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.201 SPDK target shutdown done 00:05:10.201 06:04:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:10.201 INFO: relaunching applications... 
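The shutdown traced in json_config/common.sh is a SIGINT followed by a bounded liveness poll, roughly:

    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break   # kill -0 fails once the PID is gone
        sleep 0.5
    done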
00:05:10.201 06:04:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.201 06:04:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.201 06:04:04 json_config -- json_config/common.sh@10 -- # shift 00:05:10.201 06:04:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.201 06:04:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.201 06:04:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.201 06:04:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.201 06:04:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.201 06:04:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=116791 00:05:10.201 06:04:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.201 Waiting for target to run... 00:05:10.201 06:04:04 json_config -- json_config/common.sh@25 -- # waitforlisten 116791 /var/tmp/spdk_tgt.sock 00:05:10.201 06:04:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@835 -- # '[' -z 116791 ']' 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.201 06:04:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 [2024-12-09 06:04:04.744396] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:10.202 [2024-12-09 06:04:04.744478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116791 ] 00:05:10.771 [2024-12-09 06:04:05.180972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.771 [2024-12-09 06:04:05.215528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.072 [2024-12-09 06:04:08.236146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.072 [2024-12-09 06:04:08.268485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.644 06:04:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.644 06:04:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:14.644 06:04:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.644 00:05:14.644 06:04:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:14.644 06:04:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.644 INFO: Checking if target configuration is the same... 
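Note that the relaunch differs from the first start only in how configuration reaches the target: the first run came up empty under --wait-for-rpc and had the config pushed via a load_config RPC, while the relaunch replays the JSON produced by save_config at startup. Side by side (paths shortened):

    # first start: empty target, config pushed over the socket
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
    # relaunch: saved config applied at startup
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json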
00:05:14.644 06:04:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.644 06:04:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:14.644 06:04:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.644 + '[' 2 -ne 2 ']' 00:05:14.644 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.644 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.644 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.644 +++ basename /dev/fd/62 00:05:14.644 ++ mktemp /tmp/62.XXX 00:05:14.644 + tmp_file_1=/tmp/62.iGV 00:05:14.644 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.644 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.644 + tmp_file_2=/tmp/spdk_tgt_config.json.csd 00:05:14.644 + ret=0 00:05:14.644 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.905 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.905 + diff -u /tmp/62.iGV /tmp/spdk_tgt_config.json.csd 00:05:14.905 + echo 'INFO: JSON config files are the same' 00:05:14.905 INFO: JSON config files are the same 00:05:14.905 + rm /tmp/62.iGV /tmp/spdk_tgt_config.json.csd 00:05:14.905 + exit 0 00:05:14.905 06:04:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:14.905 06:04:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.905 INFO: changing configuration and checking if this can be detected... 00:05:14.905 06:04:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.905 06:04:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.905 06:04:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:14.905 06:04:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.905 06:04:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.905 + '[' 2 -ne 2 ']' 00:05:14.905 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
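json_diff.sh, whose first run just reported identical configs, normalizes both inputs with config_filter.py before diffing, since save_config output ordering is not guaranteed to be stable. Schematically (the exact stream plumbing is an assumption):

    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    config_filter.py -method sort < "$1" > "$tmp_file_1"   # assumed stdin/stdout plumbing
    config_filter.py -method sort < "$2" > "$tmp_file_2"
    diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'
    rm "$tmp_file_1" "$tmp_file_2"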
00:05:14.905 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.905 +++ basename /dev/fd/62 00:05:14.906 ++ mktemp /tmp/62.XXX 00:05:15.167 + tmp_file_1=/tmp/62.cOd 00:05:15.167 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.167 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.167 + tmp_file_2=/tmp/spdk_tgt_config.json.WMn 00:05:15.167 + ret=0 00:05:15.167 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.429 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.429 + diff -u /tmp/62.cOd /tmp/spdk_tgt_config.json.WMn 00:05:15.429 + ret=1 00:05:15.429 + echo '=== Start of file: /tmp/62.cOd ===' 00:05:15.429 + cat /tmp/62.cOd 00:05:15.429 + echo '=== End of file: /tmp/62.cOd ===' 00:05:15.429 + echo '' 00:05:15.429 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WMn ===' 00:05:15.429 + cat /tmp/spdk_tgt_config.json.WMn 00:05:15.429 + echo '=== End of file: /tmp/spdk_tgt_config.json.WMn ===' 00:05:15.429 + echo '' 00:05:15.429 + rm /tmp/62.cOd /tmp/spdk_tgt_config.json.WMn 00:05:15.429 + exit 1 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:15.429 INFO: configuration change detected. 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.429 06:04:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.429 06:04:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 116791 ]] 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.429 06:04:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.429 06:04:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.429 06:04:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.430 06:04:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.430 06:04:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.430 06:04:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.430 06:04:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.430 06:04:09 json_config -- json_config/json_config.sh@330 -- # killprocess 116791 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 116791 ']' 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@958 -- # kill -0 116791 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@959 -- # uname 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.430 06:04:09 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116791 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116791' 00:05:15.430 killing process with pid 116791 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@973 -- # kill 116791 00:05:15.430 06:04:09 json_config -- common/autotest_common.sh@978 -- # wait 116791 00:05:17.977 06:04:12 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.977 06:04:12 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:17.977 06:04:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.977 06:04:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.977 06:04:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:17.977 06:04:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:17.977 INFO: Success 00:05:17.977 00:05:17.977 real 0m17.214s 00:05:17.977 user 0m17.223s 00:05:17.977 sys 0m2.671s 00:05:17.978 06:04:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.978 06:04:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.978 ************************************ 00:05:17.978 END TEST json_config 00:05:17.978 ************************************ 00:05:17.978 06:04:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.978 06:04:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.978 06:04:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.978 06:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:17.978 ************************************ 00:05:17.978 START TEST json_config_extra_key 00:05:17.978 ************************************ 00:05:17.978 06:04:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.978 06:04:12 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.241 06:04:12 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 06:04:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 06:04:12 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.241 06:04:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.241 06:04:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.241 06:04:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.242 06:04:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.242 06:04:12 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.242 06:04:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.242 06:04:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.242 06:04:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.242 INFO: launching applications... 
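The per-app bookkeeping traced above is plain bash associative arrays keyed by app name ('target' here); written out directly:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')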
00:05:18.242 06:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=118413 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.242 Waiting for target to run... 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 118413 /var/tmp/spdk_tgt.sock 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 118413 ']' 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.242 06:04:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.242 06:04:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.242 [2024-12-09 06:04:12.730477] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:18.242 [2024-12-09 06:04:12.730536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118413 ] 00:05:18.503 [2024-12-09 06:04:13.007745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.503 [2024-12-09 06:04:13.032731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.075 06:04:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.075 06:04:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.075 00:05:19.075 06:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.075 INFO: shutting down applications... 
00:05:19.075 06:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 118413 ]] 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 118413 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 118413 00:05:19.075 06:04:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 118413 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.645 06:04:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.645 SPDK target shutdown done 00:05:19.645 06:04:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.645 Success 00:05:19.645 00:05:19.645 real 0m1.541s 00:05:19.645 user 0m1.177s 00:05:19.645 sys 0m0.374s 00:05:19.645 06:04:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.645 06:04:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 ************************************ 00:05:19.645 END TEST json_config_extra_key 00:05:19.645 ************************************ 00:05:19.645 06:04:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.645 06:04:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.645 06:04:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.645 06:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 ************************************ 00:05:19.645 START TEST alias_rpc 00:05:19.645 ************************************ 00:05:19.645 06:04:14 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.645 * Looking for test storage... 
00:05:19.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:19.645 06:04:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.645 06:04:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.645 06:04:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.906 06:04:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.906 --rc genhtml_branch_coverage=1 00:05:19.906 --rc genhtml_function_coverage=1 00:05:19.906 --rc genhtml_legend=1 00:05:19.906 --rc geninfo_all_blocks=1 00:05:19.906 --rc geninfo_unexecuted_blocks=1 00:05:19.906 00:05:19.906 ' 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.906 --rc genhtml_branch_coverage=1 00:05:19.906 --rc genhtml_function_coverage=1 00:05:19.906 --rc genhtml_legend=1 00:05:19.906 --rc geninfo_all_blocks=1 00:05:19.906 --rc geninfo_unexecuted_blocks=1 00:05:19.906 00:05:19.906 ' 00:05:19.906 06:04:14 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.906 --rc genhtml_branch_coverage=1 00:05:19.906 --rc genhtml_function_coverage=1 00:05:19.906 --rc genhtml_legend=1 00:05:19.906 --rc geninfo_all_blocks=1 00:05:19.906 --rc geninfo_unexecuted_blocks=1 00:05:19.906 00:05:19.906 ' 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.906 --rc genhtml_branch_coverage=1 00:05:19.906 --rc genhtml_function_coverage=1 00:05:19.906 --rc genhtml_legend=1 00:05:19.906 --rc geninfo_all_blocks=1 00:05:19.906 --rc geninfo_unexecuted_blocks=1 00:05:19.906 00:05:19.906 ' 00:05:19.906 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.906 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=118774 00:05:19.906 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.906 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 118774 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 118774 ']' 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.906 06:04:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.906 [2024-12-09 06:04:14.342512] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:19.906 [2024-12-09 06:04:14.342564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118774 ] 00:05:19.906 [2024-12-09 06:04:14.427806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.906 [2024-12-09 06:04:14.458696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.166 06:04:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.166 06:04:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.166 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:20.426 06:04:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 118774 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 118774 ']' 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 118774 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118774 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.426 06:04:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.427 06:04:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118774' 00:05:20.427 killing process with pid 118774 00:05:20.427 06:04:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 118774 00:05:20.427 06:04:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 118774 00:05:20.694 00:05:20.694 real 0m0.976s 00:05:20.694 user 0m1.029s 00:05:20.694 sys 0m0.359s 00:05:20.694 06:04:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.694 06:04:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.695 ************************************ 00:05:20.695 END TEST alias_rpc 00:05:20.695 ************************************ 00:05:20.695 06:04:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:20.695 06:04:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:20.695 06:04:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.695 06:04:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.695 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:20.695 ************************************ 00:05:20.695 START TEST spdkcli_tcp 00:05:20.695 ************************************ 00:05:20.695 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:20.695 * Looking for test storage... 
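killprocess, traced just above for pid 118774, sanity-checks the PID before signalling it; roughly:

    kill -0 "$pid"                                   # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
    [ "$process_name" = sudo ] || kill "$pid"        # never signal a sudo wrapper directly
    wait "$pid"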
00:05:20.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:20.695 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.695 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.695 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.970 06:04:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.970 --rc genhtml_branch_coverage=1 00:05:20.970 --rc genhtml_function_coverage=1 00:05:20.970 --rc genhtml_legend=1 00:05:20.970 --rc geninfo_all_blocks=1 00:05:20.970 --rc geninfo_unexecuted_blocks=1 00:05:20.970 00:05:20.970 ' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.970 --rc genhtml_branch_coverage=1 00:05:20.970 --rc genhtml_function_coverage=1 00:05:20.970 --rc genhtml_legend=1 00:05:20.970 --rc geninfo_all_blocks=1 00:05:20.970 --rc 
geninfo_unexecuted_blocks=1 00:05:20.970 00:05:20.970 ' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.970 --rc genhtml_branch_coverage=1 00:05:20.970 --rc genhtml_function_coverage=1 00:05:20.970 --rc genhtml_legend=1 00:05:20.970 --rc geninfo_all_blocks=1 00:05:20.970 --rc geninfo_unexecuted_blocks=1 00:05:20.970 00:05:20.970 ' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.970 --rc genhtml_branch_coverage=1 00:05:20.970 --rc genhtml_function_coverage=1 00:05:20.970 --rc genhtml_legend=1 00:05:20.970 --rc geninfo_all_blocks=1 00:05:20.970 --rc geninfo_unexecuted_blocks=1 00:05:20.970 00:05:20.970 ' 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=118865 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 118865 00:05:20.970 06:04:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 118865 ']' 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.970 06:04:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.970 [2024-12-09 06:04:15.409619] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
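Unlike the earlier single-core runs, this target comes up with -m 0x3 (two reactors, main core 0). The test's distinctive step, traced just below, is bridging the target's UNIX RPC socket onto TCP with socat so rpc.py can connect over IP; the flags are as logged, with -r and -t as retry/timeout knobs:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods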
00:05:20.970 [2024-12-09 06:04:15.409688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118865 ] 00:05:20.970 [2024-12-09 06:04:15.496187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.970 [2024-12-09 06:04:15.529213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.970 [2024-12-09 06:04:15.529216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.909 06:04:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.909 06:04:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:21.909 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=119154 00:05:21.909 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.909 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.909 [ 00:05:21.909 "bdev_malloc_delete", 00:05:21.909 "bdev_malloc_create", 00:05:21.909 "bdev_null_resize", 00:05:21.909 "bdev_null_delete", 00:05:21.909 "bdev_null_create", 00:05:21.909 "bdev_nvme_cuse_unregister", 00:05:21.909 "bdev_nvme_cuse_register", 00:05:21.909 "bdev_opal_new_user", 00:05:21.909 "bdev_opal_set_lock_state", 00:05:21.909 "bdev_opal_delete", 00:05:21.909 "bdev_opal_get_info", 00:05:21.909 "bdev_opal_create", 00:05:21.909 "bdev_nvme_opal_revert", 00:05:21.909 "bdev_nvme_opal_init", 00:05:21.909 "bdev_nvme_send_cmd", 00:05:21.909 "bdev_nvme_set_keys", 00:05:21.909 "bdev_nvme_get_path_iostat", 00:05:21.909 "bdev_nvme_get_mdns_discovery_info", 00:05:21.909 "bdev_nvme_stop_mdns_discovery", 00:05:21.909 "bdev_nvme_start_mdns_discovery", 00:05:21.909 "bdev_nvme_set_multipath_policy", 00:05:21.909 "bdev_nvme_set_preferred_path", 00:05:21.909 "bdev_nvme_get_io_paths", 00:05:21.909 "bdev_nvme_remove_error_injection", 00:05:21.909 "bdev_nvme_add_error_injection", 00:05:21.909 "bdev_nvme_get_discovery_info", 00:05:21.909 "bdev_nvme_stop_discovery", 00:05:21.909 "bdev_nvme_start_discovery", 00:05:21.909 "bdev_nvme_get_controller_health_info", 00:05:21.909 "bdev_nvme_disable_controller", 00:05:21.909 "bdev_nvme_enable_controller", 00:05:21.909 "bdev_nvme_reset_controller", 00:05:21.909 "bdev_nvme_get_transport_statistics", 00:05:21.909 "bdev_nvme_apply_firmware", 00:05:21.909 "bdev_nvme_detach_controller", 00:05:21.909 "bdev_nvme_get_controllers", 00:05:21.909 "bdev_nvme_attach_controller", 00:05:21.909 "bdev_nvme_set_hotplug", 00:05:21.909 "bdev_nvme_set_options", 00:05:21.909 "bdev_passthru_delete", 00:05:21.909 "bdev_passthru_create", 00:05:21.909 "bdev_lvol_set_parent_bdev", 00:05:21.909 "bdev_lvol_set_parent", 00:05:21.909 "bdev_lvol_check_shallow_copy", 00:05:21.909 "bdev_lvol_start_shallow_copy", 00:05:21.909 "bdev_lvol_grow_lvstore", 00:05:21.909 "bdev_lvol_get_lvols", 00:05:21.909 "bdev_lvol_get_lvstores", 00:05:21.909 "bdev_lvol_delete", 00:05:21.909 "bdev_lvol_set_read_only", 00:05:21.909 "bdev_lvol_resize", 00:05:21.909 "bdev_lvol_decouple_parent", 00:05:21.909 "bdev_lvol_inflate", 00:05:21.909 "bdev_lvol_rename", 00:05:21.909 "bdev_lvol_clone_bdev", 00:05:21.909 "bdev_lvol_clone", 00:05:21.909 "bdev_lvol_snapshot", 00:05:21.909 "bdev_lvol_create", 00:05:21.909 "bdev_lvol_delete_lvstore", 00:05:21.909 "bdev_lvol_rename_lvstore", 
00:05:21.909 "bdev_lvol_create_lvstore", 00:05:21.909 "bdev_raid_set_options", 00:05:21.909 "bdev_raid_remove_base_bdev", 00:05:21.909 "bdev_raid_add_base_bdev", 00:05:21.909 "bdev_raid_delete", 00:05:21.909 "bdev_raid_create", 00:05:21.909 "bdev_raid_get_bdevs", 00:05:21.909 "bdev_error_inject_error", 00:05:21.909 "bdev_error_delete", 00:05:21.909 "bdev_error_create", 00:05:21.909 "bdev_split_delete", 00:05:21.909 "bdev_split_create", 00:05:21.909 "bdev_delay_delete", 00:05:21.909 "bdev_delay_create", 00:05:21.909 "bdev_delay_update_latency", 00:05:21.909 "bdev_zone_block_delete", 00:05:21.909 "bdev_zone_block_create", 00:05:21.909 "blobfs_create", 00:05:21.909 "blobfs_detect", 00:05:21.909 "blobfs_set_cache_size", 00:05:21.909 "bdev_aio_delete", 00:05:21.909 "bdev_aio_rescan", 00:05:21.909 "bdev_aio_create", 00:05:21.909 "bdev_ftl_set_property", 00:05:21.909 "bdev_ftl_get_properties", 00:05:21.909 "bdev_ftl_get_stats", 00:05:21.909 "bdev_ftl_unmap", 00:05:21.909 "bdev_ftl_unload", 00:05:21.909 "bdev_ftl_delete", 00:05:21.909 "bdev_ftl_load", 00:05:21.909 "bdev_ftl_create", 00:05:21.909 "bdev_virtio_attach_controller", 00:05:21.909 "bdev_virtio_scsi_get_devices", 00:05:21.909 "bdev_virtio_detach_controller", 00:05:21.909 "bdev_virtio_blk_set_hotplug", 00:05:21.909 "bdev_iscsi_delete", 00:05:21.909 "bdev_iscsi_create", 00:05:21.909 "bdev_iscsi_set_options", 00:05:21.909 "accel_error_inject_error", 00:05:21.909 "ioat_scan_accel_module", 00:05:21.909 "dsa_scan_accel_module", 00:05:21.909 "iaa_scan_accel_module", 00:05:21.909 "vfu_virtio_create_fs_endpoint", 00:05:21.909 "vfu_virtio_create_scsi_endpoint", 00:05:21.909 "vfu_virtio_scsi_remove_target", 00:05:21.909 "vfu_virtio_scsi_add_target", 00:05:21.909 "vfu_virtio_create_blk_endpoint", 00:05:21.909 "vfu_virtio_delete_endpoint", 00:05:21.909 "keyring_file_remove_key", 00:05:21.909 "keyring_file_add_key", 00:05:21.909 "keyring_linux_set_options", 00:05:21.909 "fsdev_aio_delete", 00:05:21.909 "fsdev_aio_create", 00:05:21.909 "iscsi_get_histogram", 00:05:21.909 "iscsi_enable_histogram", 00:05:21.909 "iscsi_set_options", 00:05:21.909 "iscsi_get_auth_groups", 00:05:21.909 "iscsi_auth_group_remove_secret", 00:05:21.909 "iscsi_auth_group_add_secret", 00:05:21.909 "iscsi_delete_auth_group", 00:05:21.909 "iscsi_create_auth_group", 00:05:21.909 "iscsi_set_discovery_auth", 00:05:21.909 "iscsi_get_options", 00:05:21.909 "iscsi_target_node_request_logout", 00:05:21.909 "iscsi_target_node_set_redirect", 00:05:21.909 "iscsi_target_node_set_auth", 00:05:21.909 "iscsi_target_node_add_lun", 00:05:21.909 "iscsi_get_stats", 00:05:21.909 "iscsi_get_connections", 00:05:21.909 "iscsi_portal_group_set_auth", 00:05:21.909 "iscsi_start_portal_group", 00:05:21.909 "iscsi_delete_portal_group", 00:05:21.910 "iscsi_create_portal_group", 00:05:21.910 "iscsi_get_portal_groups", 00:05:21.910 "iscsi_delete_target_node", 00:05:21.910 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.910 "iscsi_target_node_add_pg_ig_maps", 00:05:21.910 "iscsi_create_target_node", 00:05:21.910 "iscsi_get_target_nodes", 00:05:21.910 "iscsi_delete_initiator_group", 00:05:21.910 "iscsi_initiator_group_remove_initiators", 00:05:21.910 "iscsi_initiator_group_add_initiators", 00:05:21.910 "iscsi_create_initiator_group", 00:05:21.910 "iscsi_get_initiator_groups", 00:05:21.910 "nvmf_set_crdt", 00:05:21.910 "nvmf_set_config", 00:05:21.910 "nvmf_set_max_subsystems", 00:05:21.910 "nvmf_stop_mdns_prr", 00:05:21.910 "nvmf_publish_mdns_prr", 00:05:21.910 "nvmf_subsystem_get_listeners", 00:05:21.910 
"nvmf_subsystem_get_qpairs", 00:05:21.910 "nvmf_subsystem_get_controllers", 00:05:21.910 "nvmf_get_stats", 00:05:21.910 "nvmf_get_transports", 00:05:21.910 "nvmf_create_transport", 00:05:21.910 "nvmf_get_targets", 00:05:21.910 "nvmf_delete_target", 00:05:21.910 "nvmf_create_target", 00:05:21.910 "nvmf_subsystem_allow_any_host", 00:05:21.910 "nvmf_subsystem_set_keys", 00:05:21.910 "nvmf_subsystem_remove_host", 00:05:21.910 "nvmf_subsystem_add_host", 00:05:21.910 "nvmf_ns_remove_host", 00:05:21.910 "nvmf_ns_add_host", 00:05:21.910 "nvmf_subsystem_remove_ns", 00:05:21.910 "nvmf_subsystem_set_ns_ana_group", 00:05:21.910 "nvmf_subsystem_add_ns", 00:05:21.910 "nvmf_subsystem_listener_set_ana_state", 00:05:21.910 "nvmf_discovery_get_referrals", 00:05:21.910 "nvmf_discovery_remove_referral", 00:05:21.910 "nvmf_discovery_add_referral", 00:05:21.910 "nvmf_subsystem_remove_listener", 00:05:21.910 "nvmf_subsystem_add_listener", 00:05:21.910 "nvmf_delete_subsystem", 00:05:21.910 "nvmf_create_subsystem", 00:05:21.910 "nvmf_get_subsystems", 00:05:21.910 "env_dpdk_get_mem_stats", 00:05:21.910 "nbd_get_disks", 00:05:21.910 "nbd_stop_disk", 00:05:21.910 "nbd_start_disk", 00:05:21.910 "ublk_recover_disk", 00:05:21.910 "ublk_get_disks", 00:05:21.910 "ublk_stop_disk", 00:05:21.910 "ublk_start_disk", 00:05:21.910 "ublk_destroy_target", 00:05:21.910 "ublk_create_target", 00:05:21.910 "virtio_blk_create_transport", 00:05:21.910 "virtio_blk_get_transports", 00:05:21.910 "vhost_controller_set_coalescing", 00:05:21.910 "vhost_get_controllers", 00:05:21.910 "vhost_delete_controller", 00:05:21.910 "vhost_create_blk_controller", 00:05:21.910 "vhost_scsi_controller_remove_target", 00:05:21.910 "vhost_scsi_controller_add_target", 00:05:21.910 "vhost_start_scsi_controller", 00:05:21.910 "vhost_create_scsi_controller", 00:05:21.910 "thread_set_cpumask", 00:05:21.910 "scheduler_set_options", 00:05:21.910 "framework_get_governor", 00:05:21.910 "framework_get_scheduler", 00:05:21.910 "framework_set_scheduler", 00:05:21.910 "framework_get_reactors", 00:05:21.910 "thread_get_io_channels", 00:05:21.910 "thread_get_pollers", 00:05:21.910 "thread_get_stats", 00:05:21.910 "framework_monitor_context_switch", 00:05:21.910 "spdk_kill_instance", 00:05:21.910 "log_enable_timestamps", 00:05:21.910 "log_get_flags", 00:05:21.910 "log_clear_flag", 00:05:21.910 "log_set_flag", 00:05:21.910 "log_get_level", 00:05:21.910 "log_set_level", 00:05:21.910 "log_get_print_level", 00:05:21.910 "log_set_print_level", 00:05:21.910 "framework_enable_cpumask_locks", 00:05:21.910 "framework_disable_cpumask_locks", 00:05:21.910 "framework_wait_init", 00:05:21.910 "framework_start_init", 00:05:21.910 "scsi_get_devices", 00:05:21.910 "bdev_get_histogram", 00:05:21.910 "bdev_enable_histogram", 00:05:21.910 "bdev_set_qos_limit", 00:05:21.910 "bdev_set_qd_sampling_period", 00:05:21.910 "bdev_get_bdevs", 00:05:21.910 "bdev_reset_iostat", 00:05:21.910 "bdev_get_iostat", 00:05:21.910 "bdev_examine", 00:05:21.910 "bdev_wait_for_examine", 00:05:21.910 "bdev_set_options", 00:05:21.910 "accel_get_stats", 00:05:21.910 "accel_set_options", 00:05:21.910 "accel_set_driver", 00:05:21.910 "accel_crypto_key_destroy", 00:05:21.910 "accel_crypto_keys_get", 00:05:21.910 "accel_crypto_key_create", 00:05:21.910 "accel_assign_opc", 00:05:21.910 "accel_get_module_info", 00:05:21.910 "accel_get_opc_assignments", 00:05:21.910 "vmd_rescan", 00:05:21.910 "vmd_remove_device", 00:05:21.910 "vmd_enable", 00:05:21.910 "sock_get_default_impl", 00:05:21.910 "sock_set_default_impl", 
00:05:21.910 "sock_impl_set_options", 00:05:21.910 "sock_impl_get_options", 00:05:21.910 "iobuf_get_stats", 00:05:21.910 "iobuf_set_options", 00:05:21.910 "keyring_get_keys", 00:05:21.910 "vfu_tgt_set_base_path", 00:05:21.910 "framework_get_pci_devices", 00:05:21.910 "framework_get_config", 00:05:21.910 "framework_get_subsystems", 00:05:21.910 "fsdev_set_opts", 00:05:21.910 "fsdev_get_opts", 00:05:21.910 "trace_get_info", 00:05:21.910 "trace_get_tpoint_group_mask", 00:05:21.910 "trace_disable_tpoint_group", 00:05:21.910 "trace_enable_tpoint_group", 00:05:21.910 "trace_clear_tpoint_mask", 00:05:21.910 "trace_set_tpoint_mask", 00:05:21.910 "notify_get_notifications", 00:05:21.910 "notify_get_types", 00:05:21.910 "spdk_get_version", 00:05:21.910 "rpc_get_methods" 00:05:21.910 ] 00:05:21.910 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.910 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.910 06:04:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 118865 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 118865 ']' 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 118865 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118865 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118865' 00:05:21.910 killing process with pid 118865 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 118865 00:05:21.910 06:04:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 118865 00:05:22.169 00:05:22.170 real 0m1.521s 00:05:22.170 user 0m2.760s 00:05:22.170 sys 0m0.488s 00:05:22.170 06:04:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.170 06:04:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.170 ************************************ 00:05:22.170 END TEST spdkcli_tcp 00:05:22.170 ************************************ 00:05:22.170 06:04:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.170 06:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.170 06:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.170 06:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:22.170 ************************************ 00:05:22.170 START TEST dpdk_mem_utility 00:05:22.170 ************************************ 00:05:22.170 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.430 * Looking for test storage... 
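The rpc_get_methods listing above was fetched over TCP rather than the usual UNIX socket: tcp.sh backgrounds socat as a TCP-to-UNIX bridge in front of the target's RPC socket and points rpc.py at 127.0.0.1:9998. Reduced to a sketch, with flags copied from this run (-r and -t appear to set rpc.py's connection retries and timeout):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods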
00:05:22.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.430 06:04:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.430 --rc genhtml_branch_coverage=1 00:05:22.430 --rc genhtml_function_coverage=1 00:05:22.430 --rc genhtml_legend=1 00:05:22.430 --rc geninfo_all_blocks=1 00:05:22.430 --rc geninfo_unexecuted_blocks=1 00:05:22.430 00:05:22.430 ' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.430 --rc 
genhtml_branch_coverage=1 00:05:22.430 --rc genhtml_function_coverage=1 00:05:22.430 --rc genhtml_legend=1 00:05:22.430 --rc geninfo_all_blocks=1 00:05:22.430 --rc geninfo_unexecuted_blocks=1 00:05:22.430 00:05:22.430 ' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.430 --rc genhtml_branch_coverage=1 00:05:22.430 --rc genhtml_function_coverage=1 00:05:22.430 --rc genhtml_legend=1 00:05:22.430 --rc geninfo_all_blocks=1 00:05:22.430 --rc geninfo_unexecuted_blocks=1 00:05:22.430 00:05:22.430 ' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.430 --rc genhtml_branch_coverage=1 00:05:22.430 --rc genhtml_function_coverage=1 00:05:22.430 --rc genhtml_legend=1 00:05:22.430 --rc geninfo_all_blocks=1 00:05:22.430 --rc geninfo_unexecuted_blocks=1 00:05:22.430 00:05:22.430 ' 00:05:22.430 06:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.430 06:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=119236 00:05:22.430 06:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 119236 00:05:22.430 06:04:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 119236 ']' 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.430 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.431 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.431 06:04:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.431 [2024-12-09 06:04:16.994466] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
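The dpdk_mem_utility test that follows asks the just-started target to dump its DPDK memory state, then renders the dump with dpdk_mem_info.py, first as a heap/mempool/memzone summary and then per heap. A sketch of the calls seen below (the dump lands at /tmp/spdk_mem_dump.txt, as the RPC response reports):

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # element-level breakdown of heap id 0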
00:05:22.431 [2024-12-09 06:04:16.994520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119236 ] 00:05:22.726 [2024-12-09 06:04:17.075672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.726 [2024-12-09 06:04:17.108640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.298 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.298 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:23.298 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:23.298 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:23.298 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.298 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.298 { 00:05:23.298 "filename": "/tmp/spdk_mem_dump.txt" 00:05:23.298 } 00:05:23.298 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.298 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.298 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:23.298 1 heaps totaling size 818.000000 MiB 00:05:23.298 size: 818.000000 MiB heap id: 0 00:05:23.298 end heaps---------- 00:05:23.298 9 mempools totaling size 603.782043 MiB 00:05:23.298 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.298 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.298 size: 100.555481 MiB name: bdev_io_119236 00:05:23.298 size: 50.003479 MiB name: msgpool_119236 00:05:23.298 size: 36.509338 MiB name: fsdev_io_119236 00:05:23.298 size: 21.763794 MiB name: PDU_Pool 00:05:23.298 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.298 size: 4.133484 MiB name: evtpool_119236 00:05:23.298 size: 0.026123 MiB name: Session_Pool 00:05:23.298 end mempools------- 00:05:23.298 6 memzones totaling size 4.142822 MiB 00:05:23.298 size: 1.000366 MiB name: RG_ring_0_119236 00:05:23.298 size: 1.000366 MiB name: RG_ring_1_119236 00:05:23.298 size: 1.000366 MiB name: RG_ring_4_119236 00:05:23.298 size: 1.000366 MiB name: RG_ring_5_119236 00:05:23.298 size: 0.125366 MiB name: RG_ring_2_119236 00:05:23.298 size: 0.015991 MiB name: RG_ring_3_119236 00:05:23.298 end memzones------- 00:05:23.298 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.558 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:23.558 list of free elements. 
size: 10.852478 MiB 00:05:23.558 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:23.558 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:23.558 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:23.558 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:23.558 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:23.558 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:23.558 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:23.558 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:23.558 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:23.558 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:23.558 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:23.558 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:23.558 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:23.558 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:23.558 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:23.558 list of standard malloc elements. size: 199.218628 MiB 00:05:23.558 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:23.558 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:23.558 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:23.558 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:23.558 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:23.558 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:23.559 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:23.559 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:23.559 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:23.559 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:23.559 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:23.559 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:23.559 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:23.559 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:23.559 list of memzone associated elements. size: 607.928894 MiB 00:05:23.559 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:23.559 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.559 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:23.559 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.559 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:23.559 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_119236_0 00:05:23.559 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:23.559 associated memzone info: size: 48.002930 MiB name: MP_msgpool_119236_0 00:05:23.559 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:23.559 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_119236_0 00:05:23.559 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:23.559 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.559 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:23.559 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.559 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:23.559 associated memzone info: size: 3.000122 MiB name: MP_evtpool_119236_0 00:05:23.559 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:23.559 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_119236 00:05:23.559 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:23.559 associated memzone info: size: 1.007996 MiB name: MP_evtpool_119236 00:05:23.559 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:23.559 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.559 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:23.559 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.559 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:23.559 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.559 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:23.559 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.559 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:23.559 associated memzone info: size: 1.000366 MiB name: RG_ring_0_119236 00:05:23.559 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:23.559 associated memzone info: size: 1.000366 MiB name: RG_ring_1_119236 00:05:23.559 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:23.559 associated memzone info: size: 1.000366 MiB name: RG_ring_4_119236 00:05:23.559 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:23.559 associated memzone info: size: 1.000366 MiB name: RG_ring_5_119236 00:05:23.559 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:23.559 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_119236 00:05:23.559 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:23.559 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_119236 00:05:23.559 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:23.559 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.559 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:23.559 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.559 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:23.559 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.559 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:23.559 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_119236 00:05:23.559 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:23.559 associated memzone info: size: 0.125366 MiB name: RG_ring_2_119236 00:05:23.559 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:23.559 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.559 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:23.559 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.559 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:23.559 associated memzone info: size: 0.015991 MiB name: RG_ring_3_119236 00:05:23.559 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:23.559 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.559 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:23.559 associated memzone info: size: 0.000183 MiB name: MP_msgpool_119236 00:05:23.559 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:23.559 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_119236 00:05:23.559 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:23.559 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_119236 00:05:23.559 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:23.559 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.559 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.559 06:04:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 119236 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 119236 ']' 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 119236 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119236 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119236' 00:05:23.559 killing process with pid 119236 00:05:23.559 06:04:17 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 119236 00:05:23.559 06:04:17 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 119236 00:05:23.819 00:05:23.819 real 0m1.406s 00:05:23.819 user 0m1.513s 00:05:23.819 sys 0m0.388s 00:05:23.819 06:04:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.819 06:04:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.819 ************************************ 00:05:23.819 END TEST dpdk_mem_utility 00:05:23.819 ************************************ 00:05:23.819 06:04:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.819 06:04:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.819 06:04:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.819 06:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.819 ************************************ 00:05:23.819 START TEST event 00:05:23.819 ************************************ 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.819 * Looking for test storage... 00:05:23.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.819 06:04:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.819 06:04:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.819 06:04:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.819 06:04:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.819 06:04:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.819 06:04:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.819 06:04:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.819 06:04:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.819 06:04:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.819 06:04:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.819 06:04:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.819 06:04:18 event -- scripts/common.sh@344 -- # case "$op" in 00:05:23.819 06:04:18 event -- scripts/common.sh@345 -- # : 1 00:05:23.819 06:04:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.819 06:04:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.819 06:04:18 event -- scripts/common.sh@365 -- # decimal 1 00:05:23.819 06:04:18 event -- scripts/common.sh@353 -- # local d=1 00:05:23.819 06:04:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.819 06:04:18 event -- scripts/common.sh@355 -- # echo 1 00:05:23.819 06:04:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.819 06:04:18 event -- scripts/common.sh@366 -- # decimal 2 00:05:23.819 06:04:18 event -- scripts/common.sh@353 -- # local d=2 00:05:23.819 06:04:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.819 06:04:18 event -- scripts/common.sh@355 -- # echo 2 00:05:23.819 06:04:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.819 06:04:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.819 06:04:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.819 06:04:18 event -- scripts/common.sh@368 -- # return 0 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.819 --rc genhtml_branch_coverage=1 00:05:23.819 --rc genhtml_function_coverage=1 00:05:23.819 --rc genhtml_legend=1 00:05:23.819 --rc geninfo_all_blocks=1 00:05:23.819 --rc geninfo_unexecuted_blocks=1 00:05:23.819 00:05:23.819 ' 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.819 --rc genhtml_branch_coverage=1 00:05:23.819 --rc genhtml_function_coverage=1 00:05:23.819 --rc genhtml_legend=1 00:05:23.819 --rc geninfo_all_blocks=1 00:05:23.819 --rc geninfo_unexecuted_blocks=1 00:05:23.819 00:05:23.819 ' 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.819 --rc genhtml_branch_coverage=1 00:05:23.819 --rc genhtml_function_coverage=1 00:05:23.819 --rc genhtml_legend=1 00:05:23.819 --rc geninfo_all_blocks=1 00:05:23.819 --rc geninfo_unexecuted_blocks=1 00:05:23.819 00:05:23.819 ' 00:05:23.819 06:04:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.819 --rc genhtml_branch_coverage=1 00:05:23.819 --rc genhtml_function_coverage=1 00:05:23.819 --rc genhtml_legend=1 00:05:23.819 --rc geninfo_all_blocks=1 00:05:23.819 --rc geninfo_unexecuted_blocks=1 00:05:23.819 00:05:23.819 ' 00:05:23.820 06:04:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:23.820 06:04:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.820 06:04:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.820 06:04:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:23.820 06:04:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.820 06:04:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.079 ************************************ 00:05:24.079 START TEST event_perf 00:05:24.079 ************************************ 00:05:24.079 06:04:18 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:24.079 Running I/O for 1 seconds...[2024-12-09 06:04:18.437301] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:24.079 [2024-12-09 06:04:18.437405] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119602 ] 00:05:24.079 [2024-12-09 06:04:18.528850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.079 [2024-12-09 06:04:18.574066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.079 [2024-12-09 06:04:18.574213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.079 [2024-12-09 06:04:18.574357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.079 Running I/O for 1 seconds...[2024-12-09 06:04:18.574358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.020 00:05:25.020 lcore 0: 195832 00:05:25.020 lcore 1: 195834 00:05:25.020 lcore 2: 195834 00:05:25.020 lcore 3: 195834 00:05:25.020 done. 00:05:25.020 00:05:25.020 real 0m1.187s 00:05:25.020 user 0m4.095s 00:05:25.020 sys 0m0.089s 00:05:25.020 06:04:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.020 06:04:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.020 ************************************ 00:05:25.020 END TEST event_perf 00:05:25.020 ************************************ 00:05:25.281 06:04:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.281 06:04:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:25.281 06:04:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.281 06:04:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.281 ************************************ 00:05:25.281 START TEST event_reactor 00:05:25.281 ************************************ 00:05:25.281 06:04:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.281 [2024-12-09 06:04:19.682741] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
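In the event_perf run above, -m 0xF spreads the benchmark across four reactors and -t 1 bounds it to one second, so each lcore counter is effectively that core's events processed per second; summing them gives the aggregate rate:

    # the four lcore counters from this run; prints 783334 (events/sec total)
    echo $(( 195832 + 195834 + 195834 + 195834 ))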
00:05:25.281 [2024-12-09 06:04:19.682848] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119923 ] 00:05:25.281 [2024-12-09 06:04:19.768862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.281 [2024-12-09 06:04:19.806289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.664 test_start 00:05:26.664 oneshot 00:05:26.664 tick 100 00:05:26.664 tick 100 00:05:26.664 tick 250 00:05:26.664 tick 100 00:05:26.664 tick 100 00:05:26.664 tick 250 00:05:26.664 tick 100 00:05:26.664 tick 500 00:05:26.664 tick 100 00:05:26.664 tick 100 00:05:26.664 tick 250 00:05:26.664 tick 100 00:05:26.664 tick 100 00:05:26.664 test_end 00:05:26.664 00:05:26.664 real 0m1.171s 00:05:26.664 user 0m1.097s 00:05:26.665 sys 0m0.071s 00:05:26.665 06:04:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.665 06:04:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.665 ************************************ 00:05:26.665 END TEST event_reactor 00:05:26.665 ************************************ 00:05:26.665 06:04:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.665 06:04:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.665 06:04:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.665 06:04:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.665 ************************************ 00:05:26.665 START TEST event_reactor_perf 00:05:26.665 ************************************ 00:05:26.665 06:04:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.665 [2024-12-09 06:04:20.925194] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:26.665 [2024-12-09 06:04:20.925292] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119973 ] 00:05:26.665 [2024-12-09 06:04:21.017270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.665 [2024-12-09 06:04:21.055747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.607 test_start 00:05:27.607 test_end 00:05:27.607 Performance: 523497 events per second 00:05:27.607 00:05:27.607 real 0m1.177s 00:05:27.607 user 0m1.088s 00:05:27.607 sys 0m0.085s 00:05:27.607 06:04:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.607 06:04:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.607 ************************************ 00:05:27.607 END TEST event_reactor_perf 00:05:27.607 ************************************ 00:05:27.607 06:04:22 event -- event/event.sh@49 -- # uname -s 00:05:27.607 06:04:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.607 06:04:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.607 06:04:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.607 06:04:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.607 06:04:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.607 ************************************ 00:05:27.607 START TEST event_scheduler 00:05:27.607 ************************************ 00:05:27.607 06:04:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.870 * Looking for test storage... 
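The reactor_perf figure above (523497 events per second on a single reactor) can also be read as an average per-event cost; a sketch:

    # microseconds per event at 523497 events/sec; prints 1.91
    echo "scale=2; 1000000 / 523497" | bc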
00:05:27.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.870 06:04:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.870 --rc genhtml_branch_coverage=1 00:05:27.870 --rc genhtml_function_coverage=1 00:05:27.870 --rc genhtml_legend=1 00:05:27.870 --rc geninfo_all_blocks=1 00:05:27.870 --rc geninfo_unexecuted_blocks=1 00:05:27.870 00:05:27.870 ' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.870 --rc genhtml_branch_coverage=1 00:05:27.870 --rc genhtml_function_coverage=1 00:05:27.870 --rc genhtml_legend=1 00:05:27.870 --rc geninfo_all_blocks=1 00:05:27.870 --rc geninfo_unexecuted_blocks=1 00:05:27.870 00:05:27.870 ' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.870 --rc genhtml_branch_coverage=1 00:05:27.870 --rc genhtml_function_coverage=1 00:05:27.870 --rc genhtml_legend=1 00:05:27.870 --rc geninfo_all_blocks=1 00:05:27.870 --rc geninfo_unexecuted_blocks=1 00:05:27.870 00:05:27.870 ' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.870 --rc genhtml_branch_coverage=1 00:05:27.870 --rc genhtml_function_coverage=1 00:05:27.870 --rc genhtml_legend=1 00:05:27.870 --rc geninfo_all_blocks=1 00:05:27.870 --rc geninfo_unexecuted_blocks=1 00:05:27.870 00:05:27.870 ' 00:05:27.870 06:04:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.870 06:04:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=120319 00:05:27.870 06:04:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.870 06:04:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 120319 00:05:27.870 06:04:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 120319 ']' 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.870 06:04:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.871 06:04:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.871 [2024-12-09 06:04:22.381771] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:27.871 [2024-12-09 06:04:22.381840] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120319 ] 00:05:28.132 [2024-12-09 06:04:22.456291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.132 [2024-12-09 06:04:22.509692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.132 [2024-12-09 06:04:22.509847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.132 [2024-12-09 06:04:22.509999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.132 [2024-12-09 06:04:22.510000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.702 06:04:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.702 06:04:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:28.702 06:04:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.702 06:04:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 [2024-12-09 06:04:23.200263] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:28.703 [2024-12-09 06:04:23.200278] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.703 [2024-12-09 06:04:23.200287] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.703 [2024-12-09 06:04:23.200291] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.703 [2024-12-09 06:04:23.200295] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.703 06:04:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 [2024-12-09 06:04:23.256012] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
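Because the scheduler app is launched with --wait-for-rpc, framework initialization pauses until it is driven over RPC: the test selects the dynamic scheduler first, then releases init. The dpdk_governor ERROR above is tolerated; the app falls back to the dynamic scheduler's built-in limits (load 20, core 80, busy 95). Replayed by hand against the same socket, the sequence reduces to:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init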
00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.703 06:04:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.703 06:04:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 ************************************
00:05:28.963 START TEST scheduler_create_thread
00:05:28.963 ************************************
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 2
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 3
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 4
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 5
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 6
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 7
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 8
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 9
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.963 10
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.963 06:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.347 06:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.347 06:04:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:30.347 06:04:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:30.347 06:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.347 06:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:31.287 06:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.287 06:04:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:31.287 06:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.287 06:04:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:31.858 06:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.858 06:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:31.858 06:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:31.858 06:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.858 06:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:32.797 06:04:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.797
00:05:32.797 real 0m3.894s
00:05:32.797 user 0m0.028s
00:05:32.797 sys 0m0.001s
00:05:32.797 06:04:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.797 06:04:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:32.797 ************************************
00:05:32.797 END TEST scheduler_create_thread
00:05:32.797 ************************************
00:05:32.797 06:04:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:32.797 06:04:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 120319
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 120319 ']'
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 120319
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120319
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120319'
00:05:32.797 killing process with pid 120319
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 120319
00:05:32.797 06:04:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 120319
00:05:33.057 [2024-12-09 06:04:27.562570] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
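The trace above is the whole scheduler_create_thread flow in miniature: four busy threads pinned to single-core masks, four idle pinned ones, two unpinned threads, one activity change, one deletion. A condensed sketch of that RPC sequence, assuming the same rpc_cmd wrapper around scripts/rpc.py --plugin scheduler_plugin that the trace shows, with thread IDs captured from the create output the way scheduler.sh@22/@25 do:

# Hedged sketch of the calls traced at scheduler.sh@12-@26 above.
for mask in 0x1 0x2 0x4 0x8; do
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30    # unpinned, 30% busy
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50           # thread 11 above, bumped to 50%
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                  # thread 12 above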
00:05:33.317
00:05:33.317 real 0m5.598s
00:05:33.317 user 0m12.110s
00:05:33.317 sys 0m0.384s
00:05:33.317 06:04:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.317 06:04:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:33.317 ************************************
00:05:33.317 END TEST event_scheduler
00:05:33.317 ************************************
00:05:33.317 06:04:27 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:33.317 06:04:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:33.317 06:04:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.317 06:04:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.317 06:04:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.317 ************************************
00:05:33.317 START TEST app_repeat
00:05:33.317 ************************************
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=121284
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 121284'
00:05:33.317 Process app_repeat pid: 121284
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:33.317 spdk_app_start Round 0
00:05:33.317 06:04:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121284 /var/tmp/spdk-nbd.sock
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 121284 ']'
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.317 06:04:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:33.318 [2024-12-09 06:04:27.860130] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:05:33.318 [2024-12-09 06:04:27.860201] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121284 ]
00:05:33.577 [2024-12-09 06:04:27.949742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:33.577 [2024-12-09 06:04:27.988816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:33.577 [2024-12-09 06:04:27.988816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.577 06:04:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:33.577 06:04:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:33.577 06:04:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:33.839 Malloc0
00:05:33.839 06:04:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:33.839 Malloc1
00:05:33.839 06:04:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:33.839 06:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:34.100 /dev/nbd0
00:05:34.100 06:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:34.100 06:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:34.100 1+0 records in
00:05:34.100 1+0 records out
00:05:34.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271061 s, 15.1 MB/s
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:34.100 06:04:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:34.100 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:34.100 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:34.100 06:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:34.360 /dev/nbd1
00:05:34.360 06:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:34.360 06:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:34.360 06:04:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:34.360 06:04:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:34.360 06:04:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:34.360 06:04:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:34.361 1+0 records in
00:05:34.361 1+0 records out
00:05:34.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216371 s, 18.9 MB/s
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:34.361 06:04:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:34.361 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:34.361 06:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
06:04:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
06:04:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
06:04:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:34.622 {
00:05:34.622 "nbd_device": "/dev/nbd0",
00:05:34.622 "bdev_name": "Malloc0"
00:05:34.622 },
00:05:34.622 {
00:05:34.622 "nbd_device": "/dev/nbd1",
00:05:34.622 "bdev_name": "Malloc1"
00:05:34.622 }
00:05:34.622 ]'
06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:34.622 {
00:05:34.622 "nbd_device": "/dev/nbd0",
00:05:34.622 "bdev_name": "Malloc0"
00:05:34.622 },
00:05:34.622 {
00:05:34.622 "nbd_device": "/dev/nbd1",
00:05:34.622 "bdev_name": "Malloc1"
00:05:34.622 }
00:05:34.622 ]'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:34.622 /dev/nbd1'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:34.622 /dev/nbd1'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:34.622 256+0 records in
00:05:34.622 256+0 records out
00:05:34.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127725 s, 82.1 MB/s
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:34.622 256+0 records in
00:05:34.622 256+0 records out
00:05:34.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121569 s, 86.3 MB/s
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:34.622 256+0 records in
00:05:34.622 256+0 records out
00:05:34.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365156 s, 28.7 MB/s
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:34.622 06:04:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:34.883 06:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.146 06:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:35.408 06:04:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:35.408 06:04:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:35.669 06:04:30 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:35.669 [2024-12-09 06:04:30.101107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:35.669 [2024-12-09 06:04:30.131469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:35.669 [2024-12-09 06:04:30.131471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.669 [2024-12-09 06:04:30.159599] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:35.669 [2024-12-09 06:04:30.159633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:38.968 spdk_app_start Round 1
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121284 /var/tmp/spdk-nbd.sock
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 121284 ']'
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
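Each round's nbd_rpc_data_verify pass, traced above at nbd_common.sh@76-@85, boils down to seeding a 1 MiB pattern file, writing it through each /dev/nbd* with direct I/O, and byte-comparing it back. A hedged sketch of that write/verify dance, with $tmp_file standing in for the workspace's test/event/nbdrandtest path:

# Sketch only; the real helper takes the nbd list and an operation argument.
tmp_file=/tmp/nbdrandtest                                     # hypothetical stand-in path
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 256 x 4 KiB = 1 MiB pattern
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write phase
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$tmp_file" "$nbd"                             # verify phase: compare readback
done
rm "$tmp_file"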
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:38.968 06:04:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:38.968 Malloc0
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:38.968 Malloc1
00:05:38.968 06:04:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:38.968 06:04:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:39.230 /dev/nbd0
00:05:39.230 06:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:39.230 06:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:39.230 1+0 records in
00:05:39.230 1+0 records out
00:05:39.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270249 s, 15.2 MB/s
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:39.230 06:04:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:39.230 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:39.231 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:39.231 06:04:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:39.494 /dev/nbd1
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:39.494 1+0 records in
00:05:39.494 1+0 records out
00:05:39.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223239 s, 18.3 MB/s
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:39.494 06:04:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:39.494 06:04:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:39.754 {
00:05:39.754 "nbd_device": "/dev/nbd0",
00:05:39.754 "bdev_name": "Malloc0"
00:05:39.754 },
00:05:39.754 {
00:05:39.754 "nbd_device": "/dev/nbd1",
00:05:39.754 "bdev_name": "Malloc1"
00:05:39.754 }
00:05:39.754 ]'
06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:39.754 {
00:05:39.754 "nbd_device": "/dev/nbd0",
00:05:39.754 "bdev_name": "Malloc0"
00:05:39.754 },
00:05:39.754 {
00:05:39.754 "nbd_device": "/dev/nbd1",
00:05:39.754 "bdev_name": "Malloc1"
00:05:39.754 }
00:05:39.754 ]'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:39.754 /dev/nbd1'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:39.754 /dev/nbd1'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:39.754 256+0 records in
00:05:39.754 256+0 records out
00:05:39.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117407 s, 89.3 MB/s
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:39.754 256+0 records in
00:05:39.754 256+0 records out
00:05:39.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124585 s, 84.2 MB/s
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:39.754 256+0 records in
00:05:39.754 256+0 records out
00:05:39.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128971 s, 81.3 MB/s
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:39.754 06:04:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:40.013 06:04:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:40.273 06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:40.532 06:04:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:40.532 06:04:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:40.532 06:04:35 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:40.791 [2024-12-09 06:04:35.179168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:40.791 [2024-12-09 06:04:35.209062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.791 [2024-12-09 06:04:35.209062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.791 [2024-12-09 06:04:35.238072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:40.791 [2024-12-09 06:04:35.238109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:44.086 spdk_app_start Round 2
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121284 /var/tmp/spdk-nbd.sock
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 121284 ']'
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:44.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
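The waitfornbd/waitfornbd_exit pairs traced at autotest_common.sh@872-@893 gate each round on the kernel actually exposing the block device before any data is pushed through it. Roughly, and with the retry sleep being an assumption (the trace only shows the loop bounds and the direct-I/O probe):

# Hedged sketch of the waitfornbd pattern seen above.
waitfornbd_sketch() {
  local nbd_name=$1 i size
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
    sleep 0.1                                          # retry interval is an assumption
  done
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]                                     # succeed only if a real block came back
}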
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:44.086 06:04:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:44.086 Malloc0
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:44.086 Malloc1
00:05:44.086 06:04:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:44.086 06:04:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:44.346 /dev/nbd0
00:05:44.346 06:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:44.346 06:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:44.346 1+0 records in
00:05:44.346 1+0 records out
00:05:44.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221366 s, 18.5 MB/s
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.346 06:04:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:44.346 06:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.346 06:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:44.346 06:04:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:44.606 /dev/nbd1
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:44.606 1+0 records in
00:05:44.606 1+0 records out
00:05:44.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197561 s, 20.7 MB/s
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.606 06:04:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.606 06:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:44.867 06:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:44.867 {
00:05:44.867 "nbd_device": "/dev/nbd0",
00:05:44.867 "bdev_name": "Malloc0"
00:05:44.867 },
00:05:44.867 {
00:05:44.867 "nbd_device": "/dev/nbd1",
00:05:44.867 "bdev_name": "Malloc1"
00:05:44.867 }
00:05:44.867 ]'
06:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:44.867 {
00:05:44.867 "nbd_device": "/dev/nbd0",
00:05:44.867 "bdev_name": "Malloc0"
00:05:44.867 },
00:05:44.867 {
00:05:44.867 "nbd_device": "/dev/nbd1",
00:05:44.867 "bdev_name": "Malloc1"
00:05:44.868 }
00:05:44.868 ]'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:44.868 /dev/nbd1'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:44.868 /dev/nbd1'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:44.868 256+0 records in
00:05:44.868 256+0 records out
00:05:44.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126617 s, 82.8 MB/s
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:44.868 256+0 records in
00:05:44.868 256+0 records out
00:05:44.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121795 s, 86.1 MB/s
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:44.868 256+0 records in
00:05:44.868 256+0 records out
00:05:44.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131846 s, 79.5 MB/s
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:44.868 06:04:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:45.129 06:04:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:45.390 06:04:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
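The nbd_get_count checks traced at nbd_common.sh@61-@66 parse the RPC's JSON twice: once with jq for the device names, once with grep -c for the count, and after teardown the empty '[]' list falls through the "true" at @65 to yield 0. A minimal sketch of that parse, with rpc.py abbreviating the full scripts/rpc.py invocation shown in the log:

# Sketch of the count logic; rpc.py stands for the workspace scripts/rpc.py.
nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # '[]' -> '' -> 0, matching the @65 'true'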
00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.239 06:04:43 event.app_repeat -- event/event.sh@39 -- # killprocess 121284 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 121284 ']' 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 121284 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121284 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121284' 00:05:49.239 killing process with pid 121284 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 121284 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 121284 00:05:49.239 spdk_app_start is called in Round 0. 00:05:49.239 Shutdown signal received, stop current app iteration 00:05:49.239 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:05:49.239 spdk_app_start is called in Round 1. 00:05:49.239 Shutdown signal received, stop current app iteration 00:05:49.239 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:05:49.239 spdk_app_start is called in Round 2. 00:05:49.239 Shutdown signal received, stop current app iteration 00:05:49.239 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:05:49.239 spdk_app_start is called in Round 3. 
00:05:49.239 Shutdown signal received, stop current app iteration 00:05:49.239 06:04:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:49.239 06:04:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:49.239 00:05:49.239 real 0m15.658s 00:05:49.239 user 0m34.158s 00:05:49.239 sys 0m2.328s 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.239 06:04:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 ************************************ 00:05:49.239 END TEST app_repeat 00:05:49.239 ************************************ 00:05:49.239 06:04:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:49.239 06:04:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.239 06:04:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.239 06:04:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.239 06:04:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 ************************************ 00:05:49.239 START TEST cpu_locks 00:05:49.239 ************************************ 00:05:49.239 06:04:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.239 * Looking for test storage... 00:05:49.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.240 06:04:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.240 --rc genhtml_branch_coverage=1 00:05:49.240 --rc genhtml_function_coverage=1 00:05:49.240 --rc genhtml_legend=1 00:05:49.240 --rc geninfo_all_blocks=1 00:05:49.240 --rc geninfo_unexecuted_blocks=1 00:05:49.240 00:05:49.240 ' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.240 --rc genhtml_branch_coverage=1 00:05:49.240 --rc genhtml_function_coverage=1 00:05:49.240 --rc genhtml_legend=1 00:05:49.240 --rc geninfo_all_blocks=1 00:05:49.240 --rc geninfo_unexecuted_blocks=1 00:05:49.240 00:05:49.240 ' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.240 --rc genhtml_branch_coverage=1 00:05:49.240 --rc genhtml_function_coverage=1 00:05:49.240 --rc genhtml_legend=1 00:05:49.240 --rc geninfo_all_blocks=1 00:05:49.240 --rc geninfo_unexecuted_blocks=1 00:05:49.240 00:05:49.240 ' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.240 --rc genhtml_branch_coverage=1 00:05:49.240 --rc genhtml_function_coverage=1 00:05:49.240 --rc genhtml_legend=1 00:05:49.240 --rc geninfo_all_blocks=1 00:05:49.240 --rc geninfo_unexecuted_blocks=1 00:05:49.240 00:05:49.240 ' 00:05:49.240 06:04:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:49.240 06:04:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:49.240 06:04:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:49.240 06:04:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.240 06:04:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.240 ************************************ 
00:05:49.240 START TEST default_locks 00:05:49.240 ************************************ 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=124271 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 124271 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 124271 ']' 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.240 06:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.502 [2024-12-09 06:04:43.855079] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:49.502 [2024-12-09 06:04:43.855145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124271 ] 00:05:49.502 [2024-12-09 06:04:43.941600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.502 [2024-12-09 06:04:43.975982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.073 06:04:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.073 06:04:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:50.073 06:04:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 124271 00:05:50.073 06:04:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 124271 00:05:50.073 06:04:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.652 lslocks: write error 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 124271 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 124271 ']' 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 124271 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124271 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124271' 
00:05:50.652 killing process with pid 124271 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 124271 00:05:50.652 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 124271 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 124271 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 124271 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 124271 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 124271 ']' 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
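killprocess, exercised repeatedly in these tests, verifies the pid is alive with kill -0, resolves its comm name to make sure it is not about to signal a sudo wrapper, then kills and reaps it. A sketch under those assumptions; the sudo branch is only hinted at in the trace, so it is elided here:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1 # already gone, nothing to kill
    local process_name=unknown
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name == sudo ]]; then
        : # the real helper signals sudo's child instead; omitted in this sketch
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true # reap the child so no zombie outlives the test
}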
00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (124271) - No such process 00:05:50.914 ERROR: process (pid: 124271) is no longer running 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.914 00:05:50.914 real 0m1.614s 00:05:50.914 user 0m1.729s 00:05:50.914 sys 0m0.576s 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.914 06:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 ************************************ 00:05:50.915 END TEST default_locks 00:05:50.915 ************************************ 00:05:50.915 06:04:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.915 06:04:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.915 06:04:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.915 06:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.915 ************************************ 00:05:50.915 START TEST default_locks_via_rpc 00:05:50.915 ************************************ 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=124604 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 124604 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 124604 ']' 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
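The "No such process" block above is the expected-failure path: the test wraps waitforlisten in NOT, which inverts the exit status while still treating signal deaths (codes above 128) as real failures. A sketch of that inversion helper; the trace also shows an allow-list check ([[ -n '' ]]) that is omitted here:

NOT() {
    local es=0
    "$@" || es=$?
    # codes above 128 mean the command died from a signal; propagate those
    # instead of treating them as the controlled failure NOT is waiting for
    if ((es > 128)); then
        return "$es"
    fi
    # invert: NOT succeeds only when the wrapped command failed
    ((!es == 0))
}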
00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.915 06:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.175 [2024-12-09 06:04:45.525502] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:51.175 [2024-12-09 06:04:45.525549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124604 ] 00:05:51.175 [2024-12-09 06:04:45.609055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.175 [2024-12-09 06:04:45.641929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.743 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 124604 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 124604 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 124604 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 124604 ']' 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 124604 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.002 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 124604 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124604' 00:05:52.262 killing process with pid 124604 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 124604 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 124604 00:05:52.262 00:05:52.262 real 0m1.353s 00:05:52.262 user 0m1.474s 00:05:52.262 sys 0m0.443s 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.262 06:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.262 ************************************ 00:05:52.262 END TEST default_locks_via_rpc 00:05:52.262 ************************************ 00:05:52.577 06:04:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.577 06:04:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.577 06:04:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.577 06:04:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.577 ************************************ 00:05:52.577 START TEST non_locking_app_on_locked_coremask 00:05:52.577 ************************************ 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=124938 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 124938 /var/tmp/spdk.sock 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 124938 ']' 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.577 06:04:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.577 [2024-12-09 06:04:46.952690] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:52.577 [2024-12-09 06:04:46.952743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124938 ] 00:05:52.577 [2024-12-09 06:04:47.038692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.577 [2024-12-09 06:04:47.071190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=124963 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 124963 /var/tmp/spdk2.sock 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 124963 ']' 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 06:04:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:53.518 [2024-12-09 06:04:47.802924] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:53.518 [2024-12-09 06:04:47.802973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124963 ] 00:05:53.518 [2024-12-09 06:04:47.889805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
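The locks_exist checks in these tests (lslocks -p PID piped into grep -q spdk_cpu_lock) confirm the target really holds its per-core lock files. The stray "lslocks: write error" lines are almost certainly benign: grep -q exits on the first match and closes the pipe, so lslocks fails its next write. A sketch of the check:

locks_exist() {
    local pid=$1
    # each claimed core is an exclusive lock on /var/tmp/spdk_cpu_lock_NNN,
    # so listing the pid's locks is enough to prove the claim is in place
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}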
00:05:53.518 [2024-12-09 06:04:47.889826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.518 [2024-12-09 06:04:47.952430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.089 06:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.089 06:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.089 06:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 124938 00:05:54.089 06:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124938 00:05:54.089 06:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.660 lslocks: write error 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 124938 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 124938 ']' 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 124938 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124938 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124938' 00:05:54.660 killing process with pid 124938 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 124938 00:05:54.660 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 124938 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 124963 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 124963 ']' 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 124963 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124963 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124963' 00:05:55.232 killing 
process with pid 124963 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 124963 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 124963 00:05:55.232 00:05:55.232 real 0m2.866s 00:05:55.232 user 0m3.225s 00:05:55.232 sys 0m0.854s 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.232 06:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.232 ************************************ 00:05:55.232 END TEST non_locking_app_on_locked_coremask 00:05:55.232 ************************************ 00:05:55.232 06:04:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.232 06:04:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.232 06:04:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.232 06:04:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 START TEST locking_app_on_unlocked_coremask 00:05:55.491 ************************************ 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125329 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125329 /var/tmp/spdk.sock 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125329 ']' 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.491 06:04:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 [2024-12-09 06:04:49.889254] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:05:55.491 [2024-12-09 06:04:49.889309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125329 ] 00:05:55.491 [2024-12-09 06:04:49.974050] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
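Every target start in this suite goes through waitforlisten: print the banner seen throughout this log, then poll until the pid is alive and its UNIX-domain RPC socket answers, giving up after max_retries. A sketch under stated assumptions; the probe command and $rootdir are guesses, since only the banner, the retry counter, and the final (( i == 0 )) test appear in the trace:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" || return 1 # target died before it ever listened
        # assumed probe: any cheap RPC proves the socket is up and accepting
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1 # retries exhausted
}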
00:05:55.491 [2024-12-09 06:04:49.974088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.492 [2024-12-09 06:04:50.017087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125593 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125593 /var/tmp/spdk2.sock 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125593 ']' 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.433 06:04:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.433 [2024-12-09 06:04:50.734110] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:56.433 [2024-12-09 06:04:50.734162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125593 ] 00:05:56.433 [2024-12-09 06:04:50.822632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.433 [2024-12-09 06:04:50.885238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.003 06:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.003 06:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.003 06:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125593 00:05:57.003 06:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125593 00:05:57.003 06:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.573 lslocks: write error 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125329 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125329 ']' 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125329 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.573 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125329 00:05:57.835 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.835 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.835 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125329' 00:05:57.835 killing process with pid 125329 00:05:57.835 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125329 00:05:57.835 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125329 00:05:58.097 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125593 00:05:58.097 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125593 ']' 00:05:58.097 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125593 00:05:58.097 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.097 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.098 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125593 00:05:58.098 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.098 06:04:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.098 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125593' 00:05:58.098 killing process with pid 125593 00:05:58.098 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125593 00:05:58.098 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125593 00:05:58.357 00:05:58.357 real 0m2.940s 00:05:58.357 user 0m3.246s 00:05:58.357 sys 0m0.922s 00:05:58.357 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.357 06:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.357 ************************************ 00:05:58.357 END TEST locking_app_on_unlocked_coremask 00:05:58.357 ************************************ 00:05:58.358 06:04:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.358 06:04:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.358 06:04:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.358 06:04:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.358 ************************************ 00:05:58.358 START TEST locking_app_on_locked_coremask 00:05:58.358 ************************************ 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125939 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125939 /var/tmp/spdk.sock 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125939 ']' 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.358 06:04:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.358 [2024-12-09 06:04:52.898674] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:58.358 [2024-12-09 06:04:52.898726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125939 ] 00:05:58.617 [2024-12-09 06:04:52.983008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.617 [2024-12-09 06:04:53.015165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=126106 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 126106 /var/tmp/spdk2.sock 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 126106 /var/tmp/spdk2.sock 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 126106 /var/tmp/spdk2.sock 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 126106 ']' 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.188 06:04:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.188 [2024-12-09 06:04:53.712024] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:05:59.188 [2024-12-09 06:04:53.712074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126106 ] 00:05:59.448 [2024-12-09 06:04:53.797297] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125939 has claimed it. 00:05:59.448 [2024-12-09 06:04:53.797331] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (126106) - No such process 00:06:00.018 ERROR: process (pid: 126106) is no longer running 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.018 lslocks: write error 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125939 ']' 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125939' 00:06:00.018 killing process with pid 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 125939 00:06:00.018 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 125939 00:06:00.278 00:06:00.278 real 0m1.929s 00:06:00.278 user 0m2.176s 00:06:00.278 sys 0m0.473s 00:06:00.278 06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.278 
06:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.278 ************************************ 00:06:00.278 END TEST locking_app_on_locked_coremask 00:06:00.278 ************************************ 00:06:00.278 06:04:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.278 06:04:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.278 06:04:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.278 06:04:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.278 ************************************ 00:06:00.278 START TEST locking_overlapped_coremask 00:06:00.278 ************************************ 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=126281 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 126281 /var/tmp/spdk.sock 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 126281 ']' 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.278 06:04:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.560 [2024-12-09 06:04:54.903148] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:06:00.560 [2024-12-09 06:04:54.903198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126281 ] 00:06:00.560 [2024-12-09 06:04:54.989601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.560 [2024-12-09 06:04:55.023406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.560 [2024-12-09 06:04:55.023552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.560 [2024-12-09 06:04:55.023723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=126561 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 126561 /var/tmp/spdk2.sock 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 126561 /var/tmp/spdk2.sock 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.131 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 126561 /var/tmp/spdk2.sock 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 126561 ']' 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.132 06:04:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.391 [2024-12-09 06:04:55.761341] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:06:01.391 [2024-12-09 06:04:55.761392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126561 ] 00:06:01.391 [2024-12-09 06:04:55.849918] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126281 has claimed it. 00:06:01.391 [2024-12-09 06:04:55.849950] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (126561) - No such process 00:06:01.959 ERROR: process (pid: 126561) is no longer running 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 126281 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 126281 ']' 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 126281 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126281 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126281' 00:06:01.959 killing process with pid 126281 00:06:01.959 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 126281 00:06:01.959 06:04:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 126281 00:06:02.219 00:06:02.219 real 0m1.775s 00:06:02.219 user 0m5.153s 00:06:02.219 sys 0m0.387s 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.219 ************************************ 00:06:02.219 END TEST locking_overlapped_coremask 00:06:02.219 ************************************ 00:06:02.219 06:04:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.219 06:04:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.219 06:04:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.219 06:04:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.219 ************************************ 00:06:02.219 START TEST locking_overlapped_coremask_via_rpc 00:06:02.219 ************************************ 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=126626 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 126626 /var/tmp/spdk.sock 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126626 ']' 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.219 06:04:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.219 [2024-12-09 06:04:56.748344] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:06:02.219 [2024-12-09 06:04:56.748389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126626 ] 00:06:02.479 [2024-12-09 06:04:56.831788] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.479 [2024-12-09 06:04:56.831816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.479 [2024-12-09 06:04:56.866990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.479 [2024-12-09 06:04:56.867137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.479 [2024-12-09 06:04:56.867138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=126928 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 126928 /var/tmp/spdk2.sock 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126928 ']' 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.072 06:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.072 [2024-12-09 06:04:57.604954] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:06:03.072 [2024-12-09 06:04:57.605006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126928 ] 00:06:03.333 [2024-12-09 06:04:57.696822] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.333 [2024-12-09 06:04:57.696846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.333 [2024-12-09 06:04:57.755692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.333 [2024-12-09 06:04:57.755842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.333 [2024-12-09 06:04:57.755844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.903 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.903 [2024-12-09 06:04:58.405509] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126626 has claimed it. 
00:06:03.903 request: 00:06:03.903 { 00:06:03.903 "method": "framework_enable_cpumask_locks", 00:06:03.903 "req_id": 1 00:06:03.903 } 00:06:03.903 Got JSON-RPC error response 00:06:03.903 response: 00:06:03.903 { 00:06:03.903 "code": -32603, 00:06:03.903 "message": "Failed to claim CPU core: 2" 00:06:03.904 } 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 126626 /var/tmp/spdk.sock 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126626 ']' 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.904 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 126928 /var/tmp/spdk2.sock 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126928 ']' 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.163 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
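The -32603 response above is the RPC-driven variant of the same collision: both targets boot cleanly despite overlapping masks because --disable-cpumask-locks defers the lock, and the claim only happens when framework_enable_cpumask_locks is invoked. A hedged replay of that exchange with scripts/rpc.py (socket paths as in the trace; not a verbatim excerpt of the test):

    ./scripts/rpc.py framework_enable_cpumask_locks                          # pid 126626 claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # pid 126928 fails: core 2 already locked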
00:06:04.164 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.164 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.424 00:06:04.424 real 0m2.069s 00:06:04.424 user 0m0.846s 00:06:04.424 sys 0m0.161s 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.424 06:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.424 ************************************ 00:06:04.424 END TEST locking_overlapped_coremask_via_rpc 00:06:04.424 ************************************ 00:06:04.424 06:04:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.424 06:04:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126626 ]] 00:06:04.424 06:04:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126626 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126626 ']' 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126626 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126626 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126626' 00:06:04.424 killing process with pid 126626 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126626 00:06:04.424 06:04:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126626 00:06:04.684 06:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126928 ]] 00:06:04.684 06:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126928 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126928 ']' 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126928 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
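check_remaining_locks, whose xtrace appears above, asserts that after the successful claim exactly the lock files for cores 0-2 exist. Its heart is the glob-versus-brace-expansion comparison from cpu_locks.sh@36-38; rendered standalone (same logic, outside the harness):

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # exactly cores 0, 1 and 2
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match mask 0x7'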
00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126928 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126928' 00:06:04.684 killing process with pid 126928 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126928 00:06:04.684 06:04:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126928 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126626 ]] 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126626 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126626 ']' 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126626 00:06:04.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (126626) - No such process 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126626 is not found' 00:06:04.944 Process with pid 126626 is not found 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126928 ]] 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126928 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126928 ']' 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126928 00:06:04.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (126928) - No such process 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126928 is not found' 00:06:04.944 Process with pid 126928 is not found 00:06:04.944 06:04:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.944 00:06:04.944 real 0m15.762s 00:06:04.944 user 0m27.838s 00:06:04.944 sys 0m4.715s 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.944 06:04:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 ************************************ 00:06:04.944 END TEST cpu_locks 00:06:04.944 ************************************ 00:06:04.944 00:06:04.944 real 0m41.137s 00:06:04.944 user 1m20.617s 00:06:04.944 sys 0m8.055s 00:06:04.944 06:04:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.944 06:04:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 ************************************ 00:06:04.944 END TEST event 00:06:04.944 ************************************ 00:06:04.944 06:04:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.944 06:04:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.944 06:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.944 06:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 ************************************ 00:06:04.944 START TEST thread 00:06:04.944 ************************************ 00:06:04.944 06:04:59 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.944 * Looking for test storage... 00:06:04.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:04.944 06:04:59 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.205 06:04:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.205 06:04:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.205 06:04:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.205 06:04:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.205 06:04:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.205 06:04:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.205 06:04:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.205 06:04:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.205 06:04:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.205 06:04:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.205 06:04:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.205 06:04:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:05.205 06:04:59 thread -- scripts/common.sh@345 -- # : 1 00:06:05.205 06:04:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.205 06:04:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.205 06:04:59 thread -- scripts/common.sh@365 -- # decimal 1 00:06:05.205 06:04:59 thread -- scripts/common.sh@353 -- # local d=1 00:06:05.205 06:04:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.205 06:04:59 thread -- scripts/common.sh@355 -- # echo 1 00:06:05.205 06:04:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.205 06:04:59 thread -- scripts/common.sh@366 -- # decimal 2 00:06:05.205 06:04:59 thread -- scripts/common.sh@353 -- # local d=2 00:06:05.205 06:04:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.205 06:04:59 thread -- scripts/common.sh@355 -- # echo 2 00:06:05.205 06:04:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.205 06:04:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.205 06:04:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.205 06:04:59 thread -- scripts/common.sh@368 -- # return 0 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.205 --rc genhtml_branch_coverage=1 00:06:05.205 --rc genhtml_function_coverage=1 00:06:05.205 --rc genhtml_legend=1 00:06:05.205 --rc geninfo_all_blocks=1 00:06:05.205 --rc geninfo_unexecuted_blocks=1 00:06:05.205 00:06:05.205 ' 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.205 --rc genhtml_branch_coverage=1 00:06:05.205 --rc genhtml_function_coverage=1 00:06:05.205 --rc genhtml_legend=1 00:06:05.205 --rc geninfo_all_blocks=1 00:06:05.205 --rc geninfo_unexecuted_blocks=1 00:06:05.205 00:06:05.205 ' 00:06:05.205 06:04:59 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.205 --rc genhtml_branch_coverage=1 00:06:05.205 --rc genhtml_function_coverage=1 00:06:05.205 --rc genhtml_legend=1 00:06:05.205 --rc geninfo_all_blocks=1 00:06:05.205 --rc geninfo_unexecuted_blocks=1 00:06:05.205 00:06:05.205 ' 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.205 --rc genhtml_branch_coverage=1 00:06:05.205 --rc genhtml_function_coverage=1 00:06:05.205 --rc genhtml_legend=1 00:06:05.205 --rc geninfo_all_blocks=1 00:06:05.205 --rc geninfo_unexecuted_blocks=1 00:06:05.205 00:06:05.205 ' 00:06:05.205 06:04:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.205 06:04:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.205 ************************************ 00:06:05.205 START TEST thread_poller_perf 00:06:05.205 ************************************ 00:06:05.205 06:04:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.205 [2024-12-09 06:04:59.682888] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:06:05.205 [2024-12-09 06:04:59.682982] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127341 ] 00:06:05.205 [2024-12-09 06:04:59.770950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.466 [2024-12-09 06:04:59.803263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.466 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.408 [2024-12-09T05:05:00.995Z] ====================================== 00:06:06.408 [2024-12-09T05:05:00.995Z] busy:2609343930 (cyc) 00:06:06.408 [2024-12-09T05:05:00.995Z] total_run_count: 407000 00:06:06.408 [2024-12-09T05:05:00.995Z] tsc_hz: 2600000000 (cyc) 00:06:06.408 [2024-12-09T05:05:00.995Z] ====================================== 00:06:06.408 [2024-12-09T05:05:00.995Z] poller_cost: 6411 (cyc), 2465 (nsec) 00:06:06.408 00:06:06.408 real 0m1.177s 00:06:06.408 user 0m1.091s 00:06:06.408 sys 0m0.081s 00:06:06.408 06:05:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.408 06:05:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 ************************************ 00:06:06.408 END TEST thread_poller_perf 00:06:06.408 ************************************ 00:06:06.408 06:05:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.408 06:05:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.408 06:05:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.408 06:05:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 ************************************ 00:06:06.408 START TEST thread_poller_perf 00:06:06.408 ************************************ 00:06:06.408 06:05:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.408 [2024-12-09 06:05:00.938707] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:06:06.408 [2024-12-09 06:05:00.938818] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127534 ] 00:06:06.668 [2024-12-09 06:05:01.027164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.668 [2024-12-09 06:05:01.066727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.668 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:07.610 [2024-12-09T05:05:02.197Z] ====================================== 00:06:07.610 [2024-12-09T05:05:02.197Z] busy:2601624332 (cyc) 00:06:07.610 [2024-12-09T05:05:02.197Z] total_run_count: 4957000 00:06:07.610 [2024-12-09T05:05:02.197Z] tsc_hz: 2600000000 (cyc) 00:06:07.610 [2024-12-09T05:05:02.197Z] ====================================== 00:06:07.610 [2024-12-09T05:05:02.197Z] poller_cost: 524 (cyc), 201 (nsec) 00:06:07.610 00:06:07.610 real 0m1.177s 00:06:07.610 user 0m1.093s 00:06:07.610 sys 0m0.080s 00:06:07.610 06:05:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.610 06:05:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.610 ************************************ 00:06:07.610 END TEST thread_poller_perf 00:06:07.610 ************************************ 00:06:07.610 06:05:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.610 00:06:07.610 real 0m2.703s 00:06:07.610 user 0m2.361s 00:06:07.610 sys 0m0.356s 00:06:07.610 06:05:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.610 06:05:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.610 ************************************ 00:06:07.610 END TEST thread 00:06:07.610 ************************************ 00:06:07.610 06:05:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:07.610 06:05:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.610 06:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.610 06:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.610 06:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:07.871 ************************************ 00:06:07.871 START TEST app_cmdline 00:06:07.871 ************************************ 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.871 * Looking for test storage... 
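Both poller_cost figures above follow directly from the printed counters: cycles per poll is busy divided by total_run_count, and the nanosecond value divides again by the TSC rate (2600000000 cyc/s, i.e. 2.6 cycles per nsec):

    2609343930 / 407000  = 6411 cyc/poll; 6411 / 2.6 = 2465 nsec  (-l 1: 1 usec period)
    2601624332 / 4957000 =  524 cyc/poll;  524 / 2.6 =  201 nsec  (-l 0: 0 usec period)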
00:06:07.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.871 06:05:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.871 --rc genhtml_branch_coverage=1 00:06:07.871 --rc genhtml_function_coverage=1 00:06:07.871 --rc genhtml_legend=1 00:06:07.871 --rc geninfo_all_blocks=1 00:06:07.871 --rc geninfo_unexecuted_blocks=1 00:06:07.871 00:06:07.871 ' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.871 --rc genhtml_branch_coverage=1 00:06:07.871 --rc genhtml_function_coverage=1 00:06:07.871 --rc genhtml_legend=1 00:06:07.871 --rc geninfo_all_blocks=1 00:06:07.871 --rc geninfo_unexecuted_blocks=1 
00:06:07.871 00:06:07.871 ' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.871 --rc genhtml_branch_coverage=1 00:06:07.871 --rc genhtml_function_coverage=1 00:06:07.871 --rc genhtml_legend=1 00:06:07.871 --rc geninfo_all_blocks=1 00:06:07.871 --rc geninfo_unexecuted_blocks=1 00:06:07.871 00:06:07.871 ' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.871 --rc genhtml_branch_coverage=1 00:06:07.871 --rc genhtml_function_coverage=1 00:06:07.871 --rc genhtml_legend=1 00:06:07.871 --rc geninfo_all_blocks=1 00:06:07.871 --rc geninfo_unexecuted_blocks=1 00:06:07.871 00:06:07.871 ' 00:06:07.871 06:05:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.871 06:05:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127767 00:06:07.871 06:05:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127767 00:06:07.871 06:05:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 127767 ']' 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.871 06:05:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.871 [2024-12-09 06:05:02.438552] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:06:07.871 [2024-12-09 06:05:02.438628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127767 ] 00:06:08.133 [2024-12-09 06:05:02.526862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.133 [2024-12-09 06:05:02.561673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.705 06:05:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.705 06:05:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:08.705 06:05:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:08.964 { 00:06:08.964 "version": "SPDK v25.01-pre git sha1 15ce1ba92", 00:06:08.964 "fields": { 00:06:08.964 "major": 25, 00:06:08.964 "minor": 1, 00:06:08.964 "patch": 0, 00:06:08.964 "suffix": "-pre", 00:06:08.964 "commit": "15ce1ba92" 00:06:08.964 } 00:06:08.964 } 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:08.964 06:05:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:08.964 06:05:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.964 06:05:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.964 06:05:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.965 06:05:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.965 06:05:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:08.965 06:05:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.225 request: 00:06:09.225 { 00:06:09.225 "method": "env_dpdk_get_mem_stats", 00:06:09.225 "req_id": 1 00:06:09.225 } 00:06:09.225 Got JSON-RPC error response 00:06:09.225 response: 00:06:09.225 { 00:06:09.225 "code": -32601, 00:06:09.225 "message": "Method not found" 00:06:09.225 } 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.225 06:05:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127767 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 127767 ']' 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 127767 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127767 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127767' 00:06:09.225 killing process with pid 127767 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 127767 00:06:09.225 06:05:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 127767 00:06:09.486 00:06:09.486 real 0m1.637s 00:06:09.486 user 0m1.965s 00:06:09.486 sys 0m0.428s 00:06:09.486 06:05:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.486 06:05:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.486 ************************************ 00:06:09.486 END TEST app_cmdline 00:06:09.486 ************************************ 00:06:09.486 06:05:03 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:09.486 06:05:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.486 06:05:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.486 06:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:09.486 ************************************ 00:06:09.486 START TEST version 00:06:09.486 ************************************ 00:06:09.486 06:05:03 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:09.486 * Looking for test storage... 
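The contrast between the two RPC calls above is the point of the cmdline test: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version answers with its JSON while env_dpdk_get_mem_stats is rejected with -32601 "Method not found" even though the handler exists in an unrestricted target. A hedged reproduction against such a restricted target:

    ./scripts/rpc.py spdk_get_version          # on the allowlist -> version JSON as logged
    ./scripts/rpc.py env_dpdk_get_mem_stats    # filtered by --rpcs-allowed -> error -32601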
00:06:09.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.486 06:05:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.486 06:05:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.486 06:05:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.747 06:05:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.747 06:05:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.747 06:05:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.747 06:05:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.747 06:05:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.747 06:05:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.747 06:05:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.747 06:05:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.747 06:05:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.747 06:05:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.747 06:05:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.747 06:05:04 version -- scripts/common.sh@344 -- # case "$op" in 00:06:09.747 06:05:04 version -- scripts/common.sh@345 -- # : 1 00:06:09.747 06:05:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.747 06:05:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.747 06:05:04 version -- scripts/common.sh@365 -- # decimal 1 00:06:09.747 06:05:04 version -- scripts/common.sh@353 -- # local d=1 00:06:09.747 06:05:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.747 06:05:04 version -- scripts/common.sh@355 -- # echo 1 00:06:09.747 06:05:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.747 06:05:04 version -- scripts/common.sh@366 -- # decimal 2 00:06:09.747 06:05:04 version -- scripts/common.sh@353 -- # local d=2 00:06:09.747 06:05:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.747 06:05:04 version -- scripts/common.sh@355 -- # echo 2 00:06:09.747 06:05:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.747 06:05:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.747 06:05:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.747 06:05:04 version -- scripts/common.sh@368 -- # return 0 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.747 --rc genhtml_branch_coverage=1 00:06:09.747 --rc genhtml_function_coverage=1 00:06:09.747 --rc genhtml_legend=1 00:06:09.747 --rc geninfo_all_blocks=1 00:06:09.747 --rc geninfo_unexecuted_blocks=1 00:06:09.747 00:06:09.747 ' 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.747 --rc genhtml_branch_coverage=1 00:06:09.747 --rc genhtml_function_coverage=1 00:06:09.747 --rc genhtml_legend=1 00:06:09.747 --rc geninfo_all_blocks=1 00:06:09.747 --rc geninfo_unexecuted_blocks=1 00:06:09.747 00:06:09.747 ' 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.747 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.747 --rc genhtml_branch_coverage=1 00:06:09.747 --rc genhtml_function_coverage=1 00:06:09.747 --rc genhtml_legend=1 00:06:09.747 --rc geninfo_all_blocks=1 00:06:09.747 --rc geninfo_unexecuted_blocks=1 00:06:09.747 00:06:09.747 ' 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.747 --rc genhtml_branch_coverage=1 00:06:09.747 --rc genhtml_function_coverage=1 00:06:09.747 --rc genhtml_legend=1 00:06:09.747 --rc geninfo_all_blocks=1 00:06:09.747 --rc geninfo_unexecuted_blocks=1 00:06:09.747 00:06:09.747 ' 00:06:09.747 06:05:04 version -- app/version.sh@17 -- # get_header_version major 00:06:09.747 06:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.747 06:05:04 version -- app/version.sh@17 -- # major=25 00:06:09.747 06:05:04 version -- app/version.sh@18 -- # get_header_version minor 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.747 06:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:09.747 06:05:04 version -- app/version.sh@18 -- # minor=1 00:06:09.747 06:05:04 version -- app/version.sh@19 -- # get_header_version patch 00:06:09.747 06:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.747 06:05:04 version -- app/version.sh@19 -- # patch=0 00:06:09.747 06:05:04 version -- app/version.sh@20 -- # get_header_version suffix 00:06:09.747 06:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:09.747 06:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.747 06:05:04 version -- app/version.sh@20 -- # suffix=-pre 00:06:09.747 06:05:04 version -- app/version.sh@22 -- # version=25.1 00:06:09.747 06:05:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:09.747 06:05:04 version -- app/version.sh@28 -- # version=25.1rc0 00:06:09.747 06:05:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:09.747 06:05:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:09.747 06:05:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:09.747 06:05:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:09.747 00:06:09.747 real 0m0.267s 00:06:09.747 user 0m0.167s 00:06:09.747 sys 0m0.145s 00:06:09.747 06:05:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.747 
06:05:04 version -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 ************************************ 00:06:09.747 END TEST version 00:06:09.747 ************************************ 00:06:09.747 06:05:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:09.747 06:05:04 -- spdk/autotest.sh@194 -- # uname -s 00:06:09.747 06:05:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:09.747 06:05:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.747 06:05:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.747 06:05:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:09.747 06:05:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.747 06:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 06:05:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:09.747 06:05:04 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:09.747 06:05:04 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:09.747 06:05:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.747 06:05:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.747 06:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 ************************************ 00:06:09.747 START TEST nvmf_tcp 00:06:09.747 ************************************ 00:06:09.747 06:05:04 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:10.007 * Looking for test storage... 
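version.sh, traced just above, derives everything from include/spdk/version.h: grep the #define, take the tab-separated second field, strip the quotes. With major=25, minor=1, patch=0 and suffix=-pre it builds 25.1, skips the patch component because it is zero, and maps the -pre suffix to rc0, giving 25.1rc0 — which matches python's spdk.__version__. A condensed sketch of that pipeline (hypothetical helper name; assumes the same tab-separated version.h layout the trace relies on):

    get_header_version() {   # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    ver="$(get_header_version MAJOR).$(get_header_version MINOR)"       # -> 25.1
    (( $(get_header_version PATCH) != 0 )) && ver+=".$(get_header_version PATCH)"
    [[ $(get_header_version SUFFIX) == -pre ]] && ver+=rc0              # -> 25.1rc0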
00:06:10.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.007 06:05:04 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.007 --rc genhtml_branch_coverage=1 00:06:10.007 --rc genhtml_function_coverage=1 00:06:10.007 --rc genhtml_legend=1 00:06:10.007 --rc geninfo_all_blocks=1 00:06:10.007 --rc geninfo_unexecuted_blocks=1 00:06:10.007 00:06:10.007 ' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.007 --rc genhtml_branch_coverage=1 00:06:10.007 --rc genhtml_function_coverage=1 00:06:10.007 --rc genhtml_legend=1 00:06:10.007 --rc geninfo_all_blocks=1 00:06:10.007 --rc geninfo_unexecuted_blocks=1 00:06:10.007 00:06:10.007 ' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.007 --rc genhtml_branch_coverage=1 00:06:10.007 --rc genhtml_function_coverage=1 00:06:10.007 --rc genhtml_legend=1 00:06:10.007 --rc geninfo_all_blocks=1 00:06:10.007 --rc geninfo_unexecuted_blocks=1 00:06:10.007 00:06:10.007 ' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.007 --rc genhtml_branch_coverage=1 00:06:10.007 --rc genhtml_function_coverage=1 00:06:10.007 --rc genhtml_legend=1 00:06:10.007 --rc geninfo_all_blocks=1 00:06:10.007 --rc geninfo_unexecuted_blocks=1 00:06:10.007 00:06:10.007 ' 00:06:10.007 06:05:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:10.007 06:05:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:10.007 06:05:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.007 06:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.007 ************************************ 00:06:10.007 START TEST nvmf_target_core 00:06:10.007 ************************************ 00:06:10.008 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:10.267 * Looking for test storage... 00:06:10.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.267 --rc genhtml_branch_coverage=1 00:06:10.267 --rc genhtml_function_coverage=1 00:06:10.267 --rc genhtml_legend=1 00:06:10.267 --rc geninfo_all_blocks=1 00:06:10.267 --rc geninfo_unexecuted_blocks=1 00:06:10.267 00:06:10.267 ' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.267 --rc genhtml_branch_coverage=1 00:06:10.267 --rc genhtml_function_coverage=1 00:06:10.267 --rc genhtml_legend=1 00:06:10.267 --rc geninfo_all_blocks=1 00:06:10.267 --rc geninfo_unexecuted_blocks=1 00:06:10.267 00:06:10.267 ' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.267 --rc genhtml_branch_coverage=1 00:06:10.267 --rc genhtml_function_coverage=1 00:06:10.267 --rc genhtml_legend=1 00:06:10.267 --rc geninfo_all_blocks=1 00:06:10.267 --rc geninfo_unexecuted_blocks=1 00:06:10.267 00:06:10.267 ' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.267 --rc genhtml_branch_coverage=1 00:06:10.267 --rc genhtml_function_coverage=1 00:06:10.267 --rc genhtml_legend=1 00:06:10.267 --rc geninfo_all_blocks=1 00:06:10.267 --rc geninfo_unexecuted_blocks=1 00:06:10.267 00:06:10.267 ' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.267 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.268 
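The version probe traced above, repeated once per nested run_test, is how the harness picks its lcov flags: it takes the last field of `lcov --version`, splits version strings on `.`, `-`, or `:` by setting IFS, and compares the fields numerically left to right; 1.15 sorts below 2, so the legacy `--rc lcov_branch_coverage=1` option spelling is exported. A compact sketch of the same comparison idea (an illustration assuming purely numeric fields, not a copy of scripts/common.sh):

    # Field-wise numeric version compare in the style of the cmp_versions trace above
    version_lt() {
        local IFS=.-:                       # split fields on '.', '-' or ':'
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                            # equal: not less-than
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov < 2: use the legacy --rc lcov_branch_coverage=1 style options"
    fi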
************************************ 00:06:10.268 START TEST nvmf_abort 00:06:10.268 ************************************ 00:06:10.268 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:10.532 * Looking for test storage... 00:06:10.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.532 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:10.533 06:05:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.533 --rc genhtml_branch_coverage=1 00:06:10.533 --rc genhtml_function_coverage=1 00:06:10.533 --rc genhtml_legend=1 00:06:10.533 --rc geninfo_all_blocks=1 00:06:10.533 --rc geninfo_unexecuted_blocks=1 00:06:10.533 00:06:10.533 ' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.533 --rc genhtml_branch_coverage=1 00:06:10.533 --rc genhtml_function_coverage=1 00:06:10.533 --rc genhtml_legend=1 00:06:10.533 --rc geninfo_all_blocks=1 00:06:10.533 --rc geninfo_unexecuted_blocks=1 00:06:10.533 00:06:10.533 ' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.533 --rc genhtml_branch_coverage=1 00:06:10.533 --rc genhtml_function_coverage=1 00:06:10.533 --rc genhtml_legend=1 00:06:10.533 --rc geninfo_all_blocks=1 00:06:10.533 --rc geninfo_unexecuted_blocks=1 00:06:10.533 00:06:10.533 ' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.533 --rc genhtml_branch_coverage=1 00:06:10.533 --rc genhtml_function_coverage=1 00:06:10.533 --rc genhtml_legend=1 00:06:10.533 --rc geninfo_all_blocks=1 00:06:10.533 --rc geninfo_unexecuted_blocks=1 00:06:10.533 00:06:10.533 ' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
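The `[: : integer expression expected` message, printed here and above each time nvmf/common.sh is sourced, comes from its line 33 testing an empty value with a numeric operator: the traced command is '[' '' -eq 1 ']', meaning the flag variable being checked is unset at that point. test(1) exits with status 2, the surrounding conditional treats that as false, and the run carries on, so the message is cosmetic noise rather than a failure. A guard that avoids it (illustrative only; the placeholder name below is not the variable common.sh actually tests):

    some_flag=""                            # hypothetical stand-in for the unset flag
    [ "$some_flag" -eq 1 ] && echo on       # "[: : integer expression expected", status 2
    [ "${some_flag:-0}" -eq 1 ] && echo on  # a numeric default keeps test(1) quiet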
00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.533 06:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:18.672 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.673 06:05:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:18.673 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:18.673 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:18.673 06:05:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:18.673 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:18.673 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.673 06:05:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:18.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:18.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:06:18.673 00:06:18.673 --- 10.0.0.2 ping statistics --- 00:06:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.673 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:18.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:06:18.673 00:06:18.673 --- 10.0.0.1 ping statistics --- 00:06:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.673 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=132115 00:06:18.673 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 132115 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 132115 ']' 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.674 06:05:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.674 [2024-12-09 06:05:12.616771] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:06:18.674 [2024-12-09 06:05:12.616834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.674 [2024-12-09 06:05:12.696218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.674 [2024-12-09 06:05:12.748785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.674 [2024-12-09 06:05:12.748843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.674 [2024-12-09 06:05:12.748851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.674 [2024-12-09 06:05:12.748858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.674 [2024-12-09 06:05:12.748864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:18.674 [2024-12-09 06:05:12.750612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.674 [2024-12-09 06:05:12.750901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.674 [2024-12-09 06:05:12.750902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.935 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.935 [2024-12-09 06:05:13.520336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 Malloc0 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 Delay0 
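Pulling together the fixture the trace has now built: nvmftestinit put one port of the back-to-back e810 pair into a private namespace (target side cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator side cvl_0_1 at 10.0.0.1 in the root namespace), opened TCP port 4420 through iptables, launched nvmf_tgt inside the namespace, and assembled the bdev stack over JSON-RPC. The four 1000000 values passed to bdev_delay_create are microsecond latencies, roughly one second average and p99 for both reads and writes, which appears intended to keep I/O outstanding long enough for the abort test to have something to abort. A condensed by-hand sketch of the same bring-up, with interface names taken from the log; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the equivalent calls are shown directly (default /var/tmp/spdk.sock socket assumed):

    # 1) Point-to-point topology: target port in its own netns, initiator in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # tagged SPDK_NVMF in the log

    # 2) Target inside the namespace, then transport and bdevs over RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # (the harness waits for the RPC socket to appear before issuing calls)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s delays keep I/O in flight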
00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 [2024-12-09 06:05:13.606798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.196 06:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:19.196 [2024-12-09 06:05:13.759464] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:21.764 Initializing NVMe Controllers 00:06:21.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:21.764 controller IO queue size 128 less than required 00:06:21.764 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:21.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:21.764 Initialization complete. Launching workers. 
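The "queue size 128 less than required" notice is the abort example observing that the controller advertises a smaller queue than the requested -q 128, so surplus requests are queued inside the NVMe driver, as the message itself suggests. The counter lines that follow are internally consistent: 62 aborts could not be submitted, 32515 were, and of those 32458 aborted an I/O while 57 arrived after their target had already completed. On the namespace side, the 32454 "failed" I/Os read as those resolved with an abort status rather than test failures, consistent with the one-second delay bdev underneath. A quick check of the bookkeeping:

    echo $(( 32458 + 57 ))    # = 32515, the aborts submitted
    echo $(( 32515 + 62 ))    # = 32577 abort attempts in total
    echo $(( 123 + 32454 ))   # = 32577 I/Os resolved, matching the abort attempts above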
00:06:21.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32454 00:06:21.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32515, failed to submit 62 00:06:21.764 success 32458, unsuccessful 57, failed 0 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:21.764 rmmod nvme_tcp 00:06:21.764 rmmod nvme_fabrics 00:06:21.764 rmmod nvme_keyring 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 132115 ']' 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 132115 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 132115 ']' 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 132115 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132115 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132115' 00:06:21.764 killing process with pid 132115 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 132115 00:06:21.764 06:05:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 132115 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.764 06:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.676 00:06:23.676 real 0m13.378s 00:06:23.676 user 0m14.324s 00:06:23.676 sys 0m6.298s 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.676 ************************************ 00:06:23.676 END TEST nvmf_abort 00:06:23.676 ************************************ 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.676 06:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.955 ************************************ 00:06:23.955 START TEST nvmf_ns_hotplug_stress 00:06:23.955 ************************************ 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:23.955 * Looking for test storage... 
00:06:23.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.955 --rc genhtml_branch_coverage=1 00:06:23.955 --rc genhtml_function_coverage=1 00:06:23.955 --rc genhtml_legend=1 00:06:23.955 --rc geninfo_all_blocks=1 00:06:23.955 --rc geninfo_unexecuted_blocks=1 00:06:23.955 00:06:23.955 ' 00:06:23.955 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.955 --rc genhtml_branch_coverage=1 00:06:23.955 --rc genhtml_function_coverage=1 00:06:23.955 --rc genhtml_legend=1 00:06:23.956 --rc geninfo_all_blocks=1 00:06:23.956 --rc geninfo_unexecuted_blocks=1 00:06:23.956 00:06:23.956 ' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.956 --rc genhtml_branch_coverage=1 00:06:23.956 --rc genhtml_function_coverage=1 00:06:23.956 --rc genhtml_legend=1 00:06:23.956 --rc geninfo_all_blocks=1 00:06:23.956 --rc geninfo_unexecuted_blocks=1 00:06:23.956 00:06:23.956 ' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.956 --rc genhtml_branch_coverage=1 00:06:23.956 --rc genhtml_function_coverage=1 00:06:23.956 --rc genhtml_legend=1 00:06:23.956 --rc geninfo_all_blocks=1 00:06:23.956 --rc geninfo_unexecuted_blocks=1 00:06:23.956 00:06:23.956 ' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
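
Note: common.sh above fixes the target's listening ports (4420/4421/4422) and derives NVME_HOSTNQN/NVME_HOSTID from nvme gen-hostnqn. On the initiator side those values would plug into nvme-cli roughly like this (a sketch, not part of this run; the subsystem NQN shown is the one created later in the trace):

    # Sketch: connect an initiator using the identity exported by common.sh.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"
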
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
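
Note: each paths/export.sh step above prepends the Go/protoc/golangci directories even though they are already on PATH, which is why the traced PATH keeps accumulating the same entries. A duplicate-safe prepend would look like this (a sketch of an alternative, not the script's actual logic):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                    # already present: no-op
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin         # second call changes nothing
    export PATH
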
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.956 06:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
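
Note: the "[: : integer expression expected" complaint above is bash rejecting [ '' -eq 1 ]: -eq needs integers on both sides, and the variable expanded empty. The test simply evaluates false and the script carries on; a guarded form avoids the noise (a sketch, variable name illustrative):

    flag=""
    if [ "${flag:-0}" -eq 1 ]; then    # default empty/unset to 0 before -eq
        echo "flag is set"
    fi
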
local -ga e810 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:32.096 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.096 
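
Note: the arrays built above (e810/x722/mlx) are filled from a pci_bus_cache map keyed by "vendor:device", and the loop then reports each match, e.g. "Found 0000:4b:00.0 (0x8086 - 0x159b)". The same discovery can be sketched directly against sysfs (standard Linux paths; the device IDs are the ones from the trace):

    # List PCI functions that match the Intel E810 NIC (0x8086:0x159b).
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done
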
06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:32.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:32.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:32.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
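
Note: nvmf_tcp_init carves the two E810 ports into a point-to-point test rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the ip commands traced around this point:

    # Target NIC lives in its own netns; initiator NIC stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
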
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.096 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:06:32.097 00:06:32.097 --- 10.0.0.2 ping statistics --- 00:06:32.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.097 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:32.097 00:06:32.097 --- 10.0.0.1 ping statistics --- 00:06:32.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.097 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=136695 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 136695 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
136695 ']' 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.097 06:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:32.097 [2024-12-09 06:05:25.948762] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:06:32.097 [2024-12-09 06:05:25.948821] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.097 [2024-12-09 06:05:26.027474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.097 [2024-12-09 06:05:26.077146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.097 [2024-12-09 06:05:26.077195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.097 [2024-12-09 06:05:26.077204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.097 [2024-12-09 06:05:26.077211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.097 [2024-12-09 06:05:26.077218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
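
Note: nvmf_tgt is launched above inside the target namespace with -m 0xE, and the three reactor notices that follow line up with that mask: 0xE selects cores 1, 2 and 3. A quick way to decode such a mask (sketch):

    # Decode which CPU cores a hex core mask selects.
    mask=0xE
    for (( core = 0; core < 8; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done   # prints cores 1, 2, 3 for 0xE
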
00:06:32.097 [2024-12-09 06:05:26.078988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.097 [2024-12-09 06:05:26.079151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.097 [2024-12-09 06:05:26.079150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:32.357 06:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:32.617 [2024-12-09 06:05:26.994603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.617 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.617 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.878 [2024-12-09 06:05:27.368982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.878 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:33.138 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:33.398 Malloc0 00:06:33.398 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:33.398 Delay0 00:06:33.398 06:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.658 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:33.658 NULL1 00:06:33.917 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
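
Note: the target build-out around this point is a fixed rpc.py sequence: transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev stack. Collected in one place (these are the calls as traced, path shortened into a variable):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
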
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:33.918 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=137305 00:06:33.918 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:33.918 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:33.918 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.178 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.438 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:34.438 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:34.438 true 00:06:34.438 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:34.438 06:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.698 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.958 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:34.958 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:34.958 true 00:06:34.958 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:34.958 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.218 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.478 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:35.478 06:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:35.478 true 00:06:35.478 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:35.478 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.737 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.997 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:35.997 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:35.997 true 00:06:35.997 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:35.997 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.258 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.519 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:36.519 06:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:36.519 true 00:06:36.519 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:36.519 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.780 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.041 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:37.041 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:37.041 true 00:06:37.041 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:37.041 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.302 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.563 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.563 06:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:37.563 true 00:06:37.563 06:05:32 
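
Note: from here the trace settles into the ns_hotplug_stress steady state: while spdk_nvme_perf (PID 137305, started with -t 30 so it runs about 30 seconds) stays alive, the script detaches and re-attaches the Delay0 namespace and grows NULL1 by one block each pass (null_size 1001, 1002, ...). Reconstructed as a loop from the repeated trace lines (a sketch; the real script's control flow may differ in detail):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                  # 137305 in this run
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # keep stressing while perf is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$(( null_size + 1 ))
        $rpc bdev_null_resize NULL1 "$null_size"
    done
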
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:37.563 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.824 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.084 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:38.084 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:38.084 true 00:06:38.084 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:38.084 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.346 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.607 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:38.608 06:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:38.608 true 00:06:38.608 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:38.608 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.869 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.869 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:38.869 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:39.128 true 00:06:39.129 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:39.129 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.389 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.650 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:39.650 06:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:39.650 true 00:06:39.650 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:39.650 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.911 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.172 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:40.172 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:40.172 true 00:06:40.172 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:40.172 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.433 06:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.693 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:40.693 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:40.693 true 00:06:40.693 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:40.693 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.953 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.213 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:41.213 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:41.213 true 00:06:41.213 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:41.213 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.474 06:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.734 06:05:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:41.734 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:41.734 true 00:06:41.734 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:41.734 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.995 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.255 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:42.255 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:42.255 true 00:06:42.255 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:42.255 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.515 06:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.515 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:42.515 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:42.775 true 00:06:42.775 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:42.775 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.034 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.034 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:43.034 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:43.293 true 00:06:43.293 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:43.293 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.553 06:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.814 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:43.814 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:43.814 true 00:06:43.814 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:43.814 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.075 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.075 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:44.075 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:44.335 true 00:06:44.335 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:44.335 06:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.594 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.854 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:44.854 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:44.854 true 00:06:44.854 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:44.854 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.113 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.373 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:45.373 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:45.373 true 00:06:45.373 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:45.373 06:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.632 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.891 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:45.891 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:45.891 true 00:06:45.891 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:45.891 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.151 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.411 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:46.411 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:46.411 true 00:06:46.411 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:46.411 06:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.670 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.930 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:46.930 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:46.930 true 00:06:46.930 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305 00:06:46.930 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.189 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.450 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:47.450 06:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:47.450 true 00:06:47.450 06:05:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305
00:06:47.450 06:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.710 06:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:47.971 06:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:47.971 06:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:47.971 true
[... the same @44 kill -0 / @45 nvmf_subsystem_remove_ns / @46 nvmf_subsystem_add_ns / @49-@50 bdev_null_resize cycle repeats for null_size 1028 through 1055 (06:05:42 through 06:05:57, roughly one pass every half second), each resize returning true ...]
00:07:03.383 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305
00:07:03.383 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.643 06:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.643 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:03.643 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:03.903 true
00:07:03.903 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305
00:07:03.903 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.163 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.163 Initializing NVMe Controllers
00:07:04.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:04.163 Controller IO queue size 128, less than required.
00:07:04.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:04.163 Initialization complete. Launching workers.
00:07:04.163 ========================================================
00:07:04.163                                                                            Latency(us)
00:07:04.163 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:04.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30257.10      14.77    4230.36    1176.08    7905.34
00:07:04.163 ========================================================
00:07:04.163 Total                                                                   :   30257.10      14.77    4230.36    1176.08    7905.34
00:07:04.163
00:07:04.163 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:07:04.163 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:07:04.423 true
00:07:04.423 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137305
00:07:04.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (137305) - No such process
00:07:04.423 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 137305
00:07:04.423 06:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
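The loop traced above is easier to follow in script form. Below is a minimal sketch of what ns_hotplug_stress.sh lines 44-50 appear to be doing, reconstructed only from this trace; the perf_pid and rpc_py variable names are assumptions, and the real script may spell this differently:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=137305   # PID of the I/O generator observed in this run (assumed variable name)
    null_size=1027
    # Keep hot-plugging the namespace and growing the NULL1 bdev for as long
    # as the I/O generator is alive; kill -0 only probes for process
    # existence, it sends no signal.
    while kill -0 "$perf_pid"; do
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        "$rpc_py" bdev_null_resize NULL1 "$null_size"   # prints "true" on success
        (( ++null_size ))
    done

Once kill -0 fails with "No such process", the loop exits, the generator is reaped with wait (@53), and namespaces 1 and 2 are removed (@54, @55) before the multi-threaded phase begins.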
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:04.683 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:04.943 null0
00:07:04.943 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:04.943 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:04.943 06:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:05.202 null1
[... the same @59 (( ++i )) / @60 bdev_null_create cycle creates null2 through null6 (06:05:59 through 06:06:00) ...]
00:07:05.982 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:06.244 null7
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
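For the multi-threaded phase, the trace shows eight null bdevs being created, one per worker, with the same RPC arguments recorded above (bdev name, size of 100, which in SPDK's bdev_null_create is given in MB, and a 4096-byte block size). A sketch of the creation loop at lines 58-60, continuing the variable names assumed earlier:

    nthreads=8
    pids=()
    # bdev_null_create prints the new bdev's name (null0 ... null7) on success.
    for (( i = 0; i < nthreads; i++ )); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done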
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
[... @62-@64 launch add_remove 2 null1 through add_remove 8 null7 the same way at 06:06:00, each with its own @14 local nsid=N bdev=nullX and first @17 nvmf_subsystem_add_ns call; the eight background jobs' traces interleave from here on ...]
00:07:06.244 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 142747 142748 142750 142753 142754 142756 142758 142759
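The PIDs collected at @64 and waited on at @66 belong to eight backgrounded add_remove invocations. A sketch reconstructed from the @14-@18 and @62-@66 trace lines above, not the verbatim script:

    # Each worker hot-plugs its own namespace ID / null bdev pair ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"

Because the eight jobs run concurrently, their xtrace output interleaves below: @17 add and @18 remove lines for different namespace IDs alternate in no fixed order.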
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.505 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.505 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.505 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.505 06:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.505 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.505 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.505 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.765 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.766 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.027 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.028 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.289 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.551 06:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.551 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
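For reference, each rpc.py invocation traced above is a thin JSON-RPC client call into the running SPDK target over its Unix socket. A single add_ns call is equivalent to roughly the following (a sketch; the default /var/tmp/spdk.sock socket path is an assumption, and the params follow the nvmf_subsystem_add_ns RPC schema):

  printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"nsid":1,"bdev_name":"null0"}}}' \
      | nc -U /var/tmp/spdk.sock    # send the request to the target's RPC socket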
00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.812 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.074 06:06:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.074 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.336 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.598 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.859 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.120 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.380 06:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.640 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.901 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.162 
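The add/remove churn above can be reconstructed from the ns_hotplug_stress.sh@16-@18 tags roughly as follows. This is a sketch, not the script's verbatim text: the worker function name and the 8-way fan-out are assumptions, suggested by the null0..null7 bdevs and by the interleaved ordering of the xtrace, which points to several workers running as parallel background jobs:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {                       # hypothetical name for the traced worker
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do   # ns_hotplug_stress.sh@16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }

  for nsid in {1..8}; do               # one worker per namespace / null bdev
      add_remove "$nsid" "null$((nsid - 1))" &
  done
  wait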
06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:10.162 rmmod nvme_tcp
00:07:10.162 rmmod nvme_fabrics
00:07:10.162 rmmod nvme_keyring
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 136695 ']'
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 136695
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 136695 ']'
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 136695
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:10.162 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136695
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136695'
killing process with pid 136695
06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 136695
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 136695
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:10.422 06:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:12.972
00:07:12.972 real 0m48.675s
00:07:12.972 user 3m19.233s
00:07:12.972 sys 0m17.113s
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:12.972 ************************************
00:07:12.972 END TEST nvmf_ns_hotplug_stress
00:07:12.972 ************************************
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:12.972 06:06:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:12.972 ************************************
00:07:12.972 START TEST nvmf_delete_subsystem
00:07:12.972 ************************************
00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:12.972 * Looking for test storage...
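Condensed, the nvmftestfini path traced above boils down to the following sequence (a paraphrase of the nvmf/common.sh and autotest_common.sh lines in the trace, with the function plumbing omitted; $nvmfpid is assumed to have been captured when the target app was launched):

  nvmfpid=136695                                         # target app pid, per the trace
  sync                                                   # nvmfcleanup
  modprobe -v -r nvme-tcp                                # drags out nvme_tcp/nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess 136695
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK_NVMF rules
  ip -4 addr flush cvl_0_1                               # release the test interface address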
00:07:12.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.972 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.973 --rc genhtml_branch_coverage=1 00:07:12.973 --rc genhtml_function_coverage=1 00:07:12.973 --rc genhtml_legend=1 00:07:12.973 --rc geninfo_all_blocks=1 00:07:12.973 --rc geninfo_unexecuted_blocks=1 00:07:12.973 00:07:12.973 ' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.973 --rc genhtml_branch_coverage=1 00:07:12.973 --rc genhtml_function_coverage=1 00:07:12.973 --rc genhtml_legend=1 00:07:12.973 --rc geninfo_all_blocks=1 00:07:12.973 --rc geninfo_unexecuted_blocks=1 00:07:12.973 00:07:12.973 ' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.973 --rc genhtml_branch_coverage=1 00:07:12.973 --rc genhtml_function_coverage=1 00:07:12.973 --rc genhtml_legend=1 00:07:12.973 --rc geninfo_all_blocks=1 00:07:12.973 --rc geninfo_unexecuted_blocks=1 00:07:12.973 00:07:12.973 ' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.973 --rc genhtml_branch_coverage=1 00:07:12.973 --rc genhtml_function_coverage=1 00:07:12.973 --rc genhtml_legend=1 00:07:12.973 --rc geninfo_all_blocks=1 00:07:12.973 --rc geninfo_unexecuted_blocks=1 00:07:12.973 00:07:12.973 ' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.973 06:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:21.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.100 
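Note: the "[: : integer expression expected" failure from test/nvmf/common.sh line 33, visible at the start of this section ('[' '' -eq 1 ']'), is bash applying the numeric -eq operator to an empty string. A minimal sketch of the usual guard (the variable name here is illustrative, not the one common.sh actually uses):

    flag=""                          # unset/empty in this CI environment
    # [ "$flag" -eq 1 ]              # fails: "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then  # default empty to 0 so the test stays numeric
        echo "feature enabled"
    fi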
06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:21.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:21.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:21.100 Found net devices under 0000:4b:00.1: cvl_0_1 
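Note: the per-device discovery above is plain sysfs; each PCI function exposes its bound kernel interfaces under /sys/bus/pci/devices/<addr>/net/. A standalone sketch of the same lookup, using one of the PCI addresses and the array expansion seen in the trace:

    pci=0000:4b:00.0                                 # address taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # one entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip paths, keep names
    # (if the device has no net children, the glob stays literal)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"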
00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.100 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:07:21.101 00:07:21.101 --- 10.0.0.2 ping statistics --- 00:07:21.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.101 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:21.101 00:07:21.101 --- 10.0.0.1 ping statistics --- 00:07:21.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.101 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=148251 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 148251 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 148251 ']' 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 06:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:21.101 [2024-12-09 06:06:14.512105] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:07:21.101 [2024-12-09 06:06:14.512169] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.101 [2024-12-09 06:06:14.606662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.101 [2024-12-09 06:06:14.656390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.101 [2024-12-09 06:06:14.656443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.101 [2024-12-09 06:06:14.656462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.101 [2024-12-09 06:06:14.656468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.101 [2024-12-09 06:06:14.656474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.101 [2024-12-09 06:06:14.658114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.101 [2024-12-09 06:06:14.658119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 [2024-12-09 06:06:15.379962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:21.101 
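Note: the namespace plumbing traced above (nvmf_tcp_init) plus the target launch reduce to a short iproute2 sequence. A condensed replay, run as root, with the interface names and addresses from this log:

    ip netns add cvl_0_0_ns_spdk                     # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and back
    # The target itself then runs inside the namespace, exactly as traced:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &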
06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 [2024-12-09 06:06:15.404243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 NULL1 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 Delay0 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=148368 00:07:21.101 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:21.102 06:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:21.102 [2024-12-09 06:06:15.521329] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
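Note: the rpc_cmd calls above map one-to-one onto SPDK's scripts/rpc.py (which talks to /var/tmp/spdk.sock by default). A standalone sketch of the same provisioning sequence; the rpc.py path is assumed relative to an SPDK checkout, all arguments are the ones traced:

    RPC=./scripts/rpc.py                  # assumed location inside the SPDK tree
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512  # null backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay arguments in microseconds
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Drive I/O against it from the initiator side, as the trace does:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The ~1,000,000 us delay injected by the Delay0 bdev is what makes the latency tables further down report averages near one second per I/O.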
00:07:23.016 06:06:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
[several hundred repeated perf completion records trimmed: 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' interleaved with 'starting I/O failed: -6' and the shell's xtrace bookkeeping, logged between 00:07:23.277 and 00:07:24.261 while the subsystem was deleted under active I/O; the distinct transport errors follow]
00:07:23.277 [2024-12-09 06:06:17.651797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13542c0 is same with the state(6) to be set
00:07:24.261 [2024-12-09 06:06:18.622807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13559b0 is same with the state(6) to be set
00:07:24.261 [2024-12-09 06:06:18.651381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1354860 is same with the state(6) to be set
00:07:24.261 [2024-12-09 06:06:18.651473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13544a0 is same with the state(6) to be set
00:07:24.261 [2024-12-09 06:06:18.652057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcb1000d680 is same with the state(6) to be set
00:07:24.261 [2024-12-09 06:06:18.654342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcb1000d020 is same with the state(6) to be set
00:07:24.261 Initializing NVMe Controllers
00:07:24.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:24.261 Controller IO queue size 128, less than required.
00:07:24.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:24.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:24.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:24.261 Initialization complete. Launching workers.
00:07:24.261 ========================================================
00:07:24.261 Latency(us)
00:07:24.261 Device Information : IOPS MiB/s Average min max
00:07:24.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.87 0.08 908059.64 430.63 1010472.01
00:07:24.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.23 0.09 912453.89 378.90 1011110.96
00:07:24.261 ========================================================
00:07:24.261 Total : 347.10 0.17 910379.35 378.90 1011110.96
00:07:24.261
00:07:24.261 [2024-12-09 06:06:18.654637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13559b0 (9): Bad file descriptor
00:07:24.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:24.261 06:06:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.261 06:06:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:24.261 06:06:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 148368
00:07:24.261 06:06:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 148368
00:07:24.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (148368) - No such process
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 148368
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 148368
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 148368
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:24.832 [2024-12-09 06:06:19.177152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=149046
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:24.832 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:24.832 [2024-12-09 06:06:19.253754] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
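Note: the delay=0 / kill -0 / sleep 0.5 lines above implement a bounded wait on the perf process; kill -0 delivers no signal and only probes whether the PID still exists. A plain-loop reconstruction of the pattern (a sketch, not a verbatim copy of delete_subsystem.sh):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # probe only; no signal is sent
        (( delay++ > 20 )) && break             # give up after ~10 s of 0.5 s naps
        sleep 0.5
    done
    # Once kill reports "No such process", perf has exited and teardown can proceed.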
00:07:25.403 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.403 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:25.403 06:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.663 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.663 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:25.663 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.236 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.236 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:26.236 06:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.809 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.809 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:26.809 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.380 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.380 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:27.380 06:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.641 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.641 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046 00:07:27.641 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.902 Initializing NVMe Controllers 00:07:27.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:27.902 Controller IO queue size 128, less than required. 00:07:27.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:27.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:27.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:27.902 Initialization complete. Launching workers. 
00:07:27.902 ========================================================
00:07:27.903 Latency(us)
00:07:27.903 Device Information : IOPS MiB/s Average min max
00:07:27.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003311.82 1000216.22 1008361.73
00:07:27.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002476.23 1000151.89 1007754.90
00:07:27.903 ========================================================
00:07:27.903 Total : 256.00 0.12 1002894.03 1000151.89 1008361.73
00:07:27.903
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149046
00:07:28.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (149046) - No such process
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 149046
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:28.165 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:28.165 rmmod nvme_tcp
00:07:28.426 rmmod nvme_fabrics
00:07:28.426 rmmod nvme_keyring
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 148251 ']'
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 148251
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 148251 ']'
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 148251
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:28.426 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148251
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148251'
00:07:28.427 killing process with pid 148251
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 148251
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 148251
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:28.427 06:06:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:30.974 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:30.974
00:07:30.974 real 0m17.999s
00:07:30.974 user 0m30.632s
00:07:30.974 sys 0m6.496s
00:07:30.974 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.974 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.974 ************************************
00:07:30.974 END TEST nvmf_delete_subsystem
00:07:30.974 ************************************
00:07:30.974 06:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:30.975 ************************************
00:07:30.975 START TEST nvmf_host_management
00:07:30.975 ************************************
00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:30.975 * Looking for test storage...
00:07:30.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.975 --rc genhtml_branch_coverage=1 00:07:30.975 --rc genhtml_function_coverage=1 00:07:30.975 --rc genhtml_legend=1 00:07:30.975 --rc geninfo_all_blocks=1 00:07:30.975 --rc geninfo_unexecuted_blocks=1 00:07:30.975 00:07:30.975 ' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.975 --rc genhtml_branch_coverage=1 00:07:30.975 --rc genhtml_function_coverage=1 00:07:30.975 --rc genhtml_legend=1 00:07:30.975 --rc geninfo_all_blocks=1 00:07:30.975 --rc geninfo_unexecuted_blocks=1 00:07:30.975 00:07:30.975 ' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.975 --rc genhtml_branch_coverage=1 00:07:30.975 --rc genhtml_function_coverage=1 00:07:30.975 --rc genhtml_legend=1 00:07:30.975 --rc geninfo_all_blocks=1 00:07:30.975 --rc geninfo_unexecuted_blocks=1 00:07:30.975 00:07:30.975 ' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.975 --rc genhtml_branch_coverage=1 00:07:30.975 --rc genhtml_function_coverage=1 00:07:30.975 --rc genhtml_legend=1 00:07:30.975 --rc geninfo_all_blocks=1 00:07:30.975 --rc geninfo_unexecuted_blocks=1 00:07:30.975 00:07:30.975 ' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.975 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:30.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.976 06:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:39.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:39.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:39.113 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:39.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.114 06:06:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:39.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:07:39.114 00:07:39.114 --- 10.0.0.2 ping statistics --- 00:07:39.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.114 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:07:39.114 00:07:39.114 --- 10.0.0.1 ping statistics --- 00:07:39.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.114 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=153745 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 153745 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 153745 ']' 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
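The nvmf_tcp_init sequence traced above is what lets one host exercise its two physical NICs end to end: the target-side port moves into its own network namespace, each side gets one of the 10.0.0.x addresses, the NVMe/TCP port is opened with a tagged iptables rule, and a ping in each direction proves the path. Condensed from the trace (interface names are the ones this rig detected):

ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator NIC stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open port 4420; the SPDK_NVMF comment is what nvmftestfini greps for later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator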
00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.114 06:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:39.114 [2024-12-09 06:06:32.630605] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:07:39.114 [2024-12-09 06:06:32.630669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.114 [2024-12-09 06:06:32.708317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.114 [2024-12-09 06:06:32.759360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.114 [2024-12-09 06:06:32.759412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.114 [2024-12-09 06:06:32.759421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.114 [2024-12-09 06:06:32.759428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.114 [2024-12-09 06:06:32.759434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
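nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. The real waitforlisten in common/autotest_common.sh does more bookkeeping than this; the socket test below is a simplification of its wait, shown only to make the shape of the startup visible:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E: reactors on cores 1-4
nvmfpid=$!

# Poll until the app listens on /var/tmp/spdk.sock (max_retries=100 in the trace).
for (( i = 0; i < 100; i++ )); do
    [[ -S /var/tmp/spdk.sock ]] && break
    kill -0 "$nvmfpid" || exit 1                    # bail out early if the target died
    sleep 0.1
done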
00:07:39.114 [2024-12-09 06:06:32.761353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.114 [2024-12-09 06:06:32.761494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.114 [2024-12-09 06:06:32.761660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.114 [2024-12-09 06:06:32.761661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.114 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.114 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:39.114 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 [2024-12-09 06:06:33.520514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 Malloc0 00:07:39.115 [2024-12-09 06:06:33.607745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=153905 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 153905 /var/tmp/bdevperf.sock 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 153905 ']' 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.115 { 00:07:39.115 "params": { 00:07:39.115 "name": "Nvme$subsystem", 00:07:39.115 "trtype": "$TEST_TRANSPORT", 00:07:39.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.115 "adrfam": "ipv4", 00:07:39.115 "trsvcid": "$NVMF_PORT", 00:07:39.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.115 "hdgst": ${hdgst:-false}, 00:07:39.115 "ddgst": ${ddgst:-false} 00:07:39.115 }, 00:07:39.115 "method": "bdev_nvme_attach_controller" 00:07:39.115 } 00:07:39.115 EOF 00:07:39.115 )") 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:39.115 06:06:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.115 "params": { 00:07:39.115 "name": "Nvme0", 00:07:39.115 "trtype": "tcp", 00:07:39.115 "traddr": "10.0.0.2", 00:07:39.115 "adrfam": "ipv4", 00:07:39.115 "trsvcid": "4420", 00:07:39.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:39.115 "hdgst": false, 00:07:39.115 "ddgst": false 00:07:39.115 }, 00:07:39.115 "method": "bdev_nvme_attach_controller" 00:07:39.115 }' 00:07:39.376 [2024-12-09 06:06:33.716930] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
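With the target up, the test provisions it over RPC and aims bdevperf at it. The rpcs.txt batch written at host_management.sh@23 is not echoed in the trace, so the subsystem calls below are a plausible reconstruction from the visible transport call and the Malloc0/listener notices, not the exact batch; rpc_cmd is the harness wrapper seen in the trace. The generated attach-controller JSON printed above reaches bdevperf through process substitution, which is why the command line reads --json /dev/fd/63, and the waitforio loop that follows polls bdev_get_iostat until enough reads have completed:

# Plausible shape of the provisioning batch (hedged; only nvmf_create_transport
# and the resulting Malloc0 / listener notices appear in the trace):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE / BLOCK_SIZE above
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# bdevperf consumes the generated JSON via an anonymous fd (/dev/fd/63):
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 --json <(gen_nvmf_target_json 0) &

# waitforio (host_management.sh@54-58): proceed once at least 100 reads landed;
# the 10-try countdown matches the trace, the sleep interval is illustrative.
for (( i = 10; i != 0; i-- )); do
    reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
    [[ $reads -ge 100 ]] && break
    sleep 0.25
done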
00:07:39.376 [2024-12-09 06:06:33.716998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153905 ] 00:07:39.376 [2024-12-09 06:06:33.809150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.376 [2024-12-09 06:06:33.861739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.637 Running I/O for 10 seconds... 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:40.213 06:06:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.213 [2024-12-09 06:06:34.595014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.595107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b230 is same with the state(6) to be set 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.213 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.213 [2024-12-09 06:06:34.602977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.213 [2024-12-09 06:06:34.603011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:07:40.213 [2024-12-09 06:06:34.603027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.213 [2024-12-09 06:06:34.603042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.213 [2024-12-09 06:06:34.603056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe636e0 is same with the state(6) to be set 00:07:40.213 [2024-12-09 06:06:34.603350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.213 [2024-12-09 06:06:34.603366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.213 [2024-12-09 06:06:34.603388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.213 [2024-12-09 06:06:34.603409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.213 [2024-12-09 06:06:34.603424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.213 [2024-12-09 06:06:34.603433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.603983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.603992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.604000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.604009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.604015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.604024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.604031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.604039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.604046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.214 [2024-12-09 06:06:34.604054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.214 [2024-12-09 06:06:34.604061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.604367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.215 [2024-12-09 06:06:34.604374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.215 [2024-12-09 06:06:34.605503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:40.215 task offset: 0 on job bdev=Nvme0n1 fails 00:07:40.215 00:07:40.215 Latency(us) 00:07:40.215 [2024-12-09T05:06:34.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.215 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:40.215 Job: Nvme0n1 ended in about 0.53 seconds with error 00:07:40.215 Verification LBA range: start 0x0 length 0x400 00:07:40.215 Nvme0n1 : 0.53 1933.39 120.84 120.84 0.00 30367.13 1802.24 30045.74 00:07:40.215 [2024-12-09T05:06:34.802Z] =================================================================================================================== 00:07:40.215 [2024-12-09T05:06:34.802Z] Total : 1933.39 120.84 120.84 0.00 30367.13 1802.24 30045.74 00:07:40.215 [2024-12-09 06:06:34.607347] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.215 [2024-12-09 06:06:34.607367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xe636e0 (9): Bad file descriptor 00:07:40.215 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.215 06:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:40.215 [2024-12-09 06:06:34.654341] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 153905 00:07:41.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (153905) - No such process 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.156 { 00:07:41.156 "params": { 00:07:41.156 "name": "Nvme$subsystem", 00:07:41.156 "trtype": "$TEST_TRANSPORT", 00:07:41.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.156 "adrfam": "ipv4", 00:07:41.156 "trsvcid": "$NVMF_PORT", 00:07:41.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.156 "hdgst": ${hdgst:-false}, 00:07:41.156 "ddgst": ${ddgst:-false} 00:07:41.156 }, 00:07:41.156 "method": "bdev_nvme_attach_controller" 00:07:41.156 } 00:07:41.156 EOF 00:07:41.156 )") 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:41.156 06:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.156 "params": { 00:07:41.156 "name": "Nvme0", 00:07:41.156 "trtype": "tcp", 00:07:41.156 "traddr": "10.0.0.2", 00:07:41.156 "adrfam": "ipv4", 00:07:41.156 "trsvcid": "4420", 00:07:41.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:41.156 "hdgst": false, 00:07:41.156 "ddgst": false 00:07:41.156 }, 00:07:41.156 "method": "bdev_nvme_attach_controller" 00:07:41.156 }' 00:07:41.156 [2024-12-09 06:06:35.671023] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
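Note on the pattern above: gen_nvmf_target_json expands the heredoc template into the bdev_nvme_attach_controller fragment that printf emits, and bdevperf reads the finished document from /dev/fd/62. A minimal standalone sketch of the same invocation follows; the file name is illustrative, and the outer "subsystems" envelope is SPDK's standard JSON-config wrapper, which the helper supplies outside the fragment shown in this trace.

# Sketch, assuming SPDK's standard JSON-config envelope around the fragment above;
# parameter values mirror the resolved config printed by the trace.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1
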
00:07:41.156 [2024-12-09 06:06:35.671074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154282 ] 00:07:41.417 [2024-12-09 06:06:35.758556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.417 [2024-12-09 06:06:35.792089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.678 Running I/O for 1 seconds... 00:07:42.619 1472.00 IOPS, 92.00 MiB/s 00:07:42.619 Latency(us) 00:07:42.619 [2024-12-09T05:06:37.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.620 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:42.620 Verification LBA range: start 0x0 length 0x400 00:07:42.620 Nvme0n1 : 1.02 1498.83 93.68 0.00 0.00 42033.58 10435.35 33272.12 00:07:42.620 [2024-12-09T05:06:37.207Z] =================================================================================================================== 00:07:42.620 [2024-12-09T05:06:37.207Z] Total : 1498.83 93.68 0.00 0.00 42033.58 10435.35 33272.12 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.881 rmmod nvme_tcp 00:07:42.881 rmmod nvme_fabrics 00:07:42.881 rmmod nvme_keyring 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 153745 ']' 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 153745 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 153745 ']' 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 153745 00:07:42.881 06:06:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153745 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153745' 00:07:42.881 killing process with pid 153745 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 153745 00:07:42.881 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 153745 00:07:43.141 [2024-12-09 06:06:37.473864] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.141 06:06:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:45.056 00:07:45.056 real 0m14.464s 00:07:45.056 user 0m23.668s 00:07:45.056 sys 0m6.498s 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 ************************************ 00:07:45.056 END TEST nvmf_host_management 00:07:45.056 ************************************ 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
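The stoptarget/nvmftestfini sequence that just completed for host_management follows a fixed shape that recurs at the end of every test in this log. Condensed from the trace (pid 153745 is this run's nvmf_tgt; _remove_spdk_ns, not expanded here, handles namespace teardown):

sync
modprobe -v -r nvme-tcp                                # also drops nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
kill 153745 && wait 153745                             # killprocess: stop this run's nvmf_tgt
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the tagged test ACCEPT rule
ip -4 addr flush cvl_0_1                               # clear the initiator-side address
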
00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.056 06:06:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.318 ************************************ 00:07:45.318 START TEST nvmf_lvol 00:07:45.318 ************************************ 00:07:45.318 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.318 * Looking for test storage... 00:07:45.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.318 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.318 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.318 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.318 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.319 --rc genhtml_branch_coverage=1 00:07:45.319 --rc genhtml_function_coverage=1 00:07:45.319 --rc genhtml_legend=1 00:07:45.319 --rc geninfo_all_blocks=1 00:07:45.319 --rc geninfo_unexecuted_blocks=1 00:07:45.319 00:07:45.319 ' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.319 --rc genhtml_branch_coverage=1 00:07:45.319 --rc genhtml_function_coverage=1 00:07:45.319 --rc genhtml_legend=1 00:07:45.319 --rc geninfo_all_blocks=1 00:07:45.319 --rc geninfo_unexecuted_blocks=1 00:07:45.319 00:07:45.319 ' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.319 --rc genhtml_branch_coverage=1 00:07:45.319 --rc genhtml_function_coverage=1 00:07:45.319 --rc genhtml_legend=1 00:07:45.319 --rc geninfo_all_blocks=1 00:07:45.319 --rc geninfo_unexecuted_blocks=1 00:07:45.319 00:07:45.319 ' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.319 --rc genhtml_branch_coverage=1 00:07:45.319 --rc genhtml_function_coverage=1 00:07:45.319 --rc genhtml_legend=1 00:07:45.319 --rc geninfo_all_blocks=1 00:07:45.319 --rc geninfo_unexecuted_blocks=1 00:07:45.319 00:07:45.319 ' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.319 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.320 06:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.465 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:53.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:53.478 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.478 06:06:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.478 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:53.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:53.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.479 06:06:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:07:53.479 00:07:53.479 --- 10.0.0.2 ping statistics --- 00:07:53.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.479 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:07:53.479 00:07:53.479 --- 10.0.0.1 ping statistics --- 00:07:53.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.479 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=158641 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 158641 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 158641 ']' 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.479 [2024-12-09 06:06:47.143186] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
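For orientation: before the target app above was launched, the nvmf_tcp_init sequence earlier in this trace wired one NIC port into a private network namespace as the target side and left its peer in the root namespace as the initiator. Condensed, with cvl_0_0/cvl_0_1 being this rig's ice port names:

ip netns add cvl_0_0_ns_spdk                          # target lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
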
00:07:53.479 [2024-12-09 06:06:47.143256] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.479 [2024-12-09 06:06:47.239258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.479 [2024-12-09 06:06:47.290268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.479 [2024-12-09 06:06:47.290324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.479 [2024-12-09 06:06:47.290333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.479 [2024-12-09 06:06:47.290340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.479 [2024-12-09 06:06:47.290346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.479 [2024-12-09 06:06:47.292108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.479 [2024-12-09 06:06:47.292258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.479 [2024-12-09 06:06:47.292259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.479 06:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.479 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.479 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.743 [2024-12-09 06:06:48.183607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.743 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.005 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:54.005 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.266 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:54.266 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.266 06:06:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.527 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=581a3561-d345-42bc-a407-0441dd297238 00:07:54.527 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 581a3561-d345-42bc-a407-0441dd297238 lvol 20 00:07:54.786 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=97df7496-2dd3-4e1c-84a4-cebb2f296be9 00:07:54.786 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.045 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 97df7496-2dd3-4e1c-84a4-cebb2f296be9 00:07:55.045 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.305 [2024-12-09 06:06:49.717820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.306 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.565 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=159012 00:07:55.566 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:55.566 06:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:56.507 06:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 97df7496-2dd3-4e1c-84a4-cebb2f296be9 MY_SNAPSHOT 00:07:56.769 06:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=38703a2d-18e8-4fa9-925a-14be274b1df5 00:07:56.769 06:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 97df7496-2dd3-4e1c-84a4-cebb2f296be9 30 00:07:57.031 06:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 38703a2d-18e8-4fa9-925a-14be274b1df5 MY_CLONE 00:07:57.031 06:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f63f9e37-6488-4f1b-9f22-704a51a1fab8 00:07:57.031 06:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f63f9e37-6488-4f1b-9f22-704a51a1fab8 00:07:57.602 06:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 159012 00:08:05.750 Initializing NVMe Controllers 00:08:05.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:05.750 Controller IO queue size 128, less than required. 00:08:05.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
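
Read in order, the RPCs traced above build the lvol stack bottom-up and then exercise it: two malloc bdevs striped into raid0, an lvstore on the raid, a 20M lvol exported over NVMe/TCP, and a snapshot/resize/clone/inflate cycle while the -c 0x18 spdk_nvme_perf job drives random writes. A condensed sketch of the same sequence (rpc.py path shortened; UUIDs captured from RPC output rather than hard-coded; size arguments taken to be MiB, this harness's convention):

  rpc.py bdev_malloc_create 64 512                        # Malloc0, then Malloc1: 64 MiB, 512 B blocks
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # prints the lvol bdev id
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken with perf I/O still running
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"                       # detach the clone from its snapshot

The perf summary just below is the point of the exercise: the volume keeps servicing the randwrite job across all of the lvol metadata operations.
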
00:08:05.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:05.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:05.750 Initialization complete. Launching workers.
00:08:05.750 ========================================================
00:08:05.750 Latency(us)
00:08:05.750 Device Information : IOPS MiB/s Average min max
00:08:05.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17018.70 66.48 7524.49 720.17 36294.45
00:08:05.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15894.30 62.09 8057.55 1591.81 57813.35
00:08:05.750 ========================================================
00:08:05.750 Total : 32913.00 128.57 7781.91 720.17 57813.35
00:08:05.750
00:08:05.750 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:06.011 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 97df7496-2dd3-4e1c-84a4-cebb2f296be9
00:08:06.272 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 581a3561-d345-42bc-a407-0441dd297238
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:06.534 rmmod nvme_tcp
00:08:06.534 rmmod nvme_fabrics
00:08:06.534 rmmod nvme_keyring
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 158641 ']'
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 158641
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 158641 ']'
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 158641
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:06.534 06:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158641
00:08:06.534 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:06.534 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:06.534 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158641'
00:08:06.534 killing process with pid 158641
00:08:06.534 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 158641
00:08:06.534 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 158641
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:06.796 06:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:08.710
00:08:08.710 real 0m23.584s
00:08:08.710 user 1m4.466s
00:08:08.710 sys 0m8.462s
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:08.710 ************************************
00:08:08.710 END TEST nvmf_lvol
00:08:08.710 ************************************
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:08.710 06:07:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:08.972 ************************************
00:08:08.972 START TEST nvmf_lvs_grow
00:08:08.972 ************************************
00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:08.972 * Looking for test storage...
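
That exit path is worth reading as a pattern: the export is dropped before the bdevs under it, and module and namespace state go last. Roughly, and with the caveat that the netns removal inside remove_spdk_ns is an assumption about that helper:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop new I/O first
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  kill "$nvmfpid" && wait "$nvmfpid"
  modprobe -r nvme-tcp nvme-fabrics    # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are this, verbose
  ip netns del cvl_0_0_ns_spdk         # assumed body of remove_spdk_ns
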
00:08:08.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.972 --rc genhtml_branch_coverage=1 00:08:08.972 --rc genhtml_function_coverage=1 00:08:08.972 --rc genhtml_legend=1 00:08:08.972 --rc geninfo_all_blocks=1 00:08:08.972 --rc geninfo_unexecuted_blocks=1 00:08:08.972 00:08:08.972 ' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.972 --rc genhtml_branch_coverage=1 00:08:08.972 --rc genhtml_function_coverage=1 00:08:08.972 --rc genhtml_legend=1 00:08:08.972 --rc geninfo_all_blocks=1 00:08:08.972 --rc geninfo_unexecuted_blocks=1 00:08:08.972 00:08:08.972 ' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.972 --rc genhtml_branch_coverage=1 00:08:08.972 --rc genhtml_function_coverage=1 00:08:08.972 --rc genhtml_legend=1 00:08:08.972 --rc geninfo_all_blocks=1 00:08:08.972 --rc geninfo_unexecuted_blocks=1 00:08:08.972 00:08:08.972 ' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.972 --rc genhtml_branch_coverage=1 00:08:08.972 --rc genhtml_function_coverage=1 00:08:08.972 --rc genhtml_legend=1 00:08:08.972 --rc geninfo_all_blocks=1 00:08:08.972 --rc geninfo_unexecuted_blocks=1 00:08:08.972 00:08:08.972 ' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.972 06:07:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.972 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.973 06:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.116 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.117 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.117 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.117 06:07:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.117 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.117 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:08:17.117 00:08:17.117 --- 10.0.0.2 ping statistics --- 00:08:17.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.117 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:08:17.117 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
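
The ping pair here closes the loop on the nvmf_tcp_init bring-up traced just above: the two E810 ports are split across network namespaces, so the target (10.0.0.2 in cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) talk over a real link rather than loopback. The same plumbing by hand, as root, with this host's interface names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP toward the initiator port
  ping -c 1 10.0.0.2                                             # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns
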
00:08:17.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:08:17.118 00:08:17.118 --- 10.0.0.1 ping statistics --- 00:08:17.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.118 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=165043 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 165043 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 165043 ']' 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.118 06:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:17.118 [2024-12-09 06:07:10.852966] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:08:17.118 [2024-12-09 06:07:10.853028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.118 [2024-12-09 06:07:10.947465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.118 [2024-12-09 06:07:10.996233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.118 [2024-12-09 06:07:10.996286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.118 [2024-12-09 06:07:10.996296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.118 [2024-12-09 06:07:10.996303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.118 [2024-12-09 06:07:10.996309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.118 [2024-12-09 06:07:10.997020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.118 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.118 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:17.118 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.118 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.118 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.380 [2024-12-09 06:07:11.878777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.380 ************************************ 00:08:17.380 START TEST lvs_grow_clean 00:08:17.380 ************************************ 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:17.380 06:07:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.380 06:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.642 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.642 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.904 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=47449eed-88a1-400d-9922-5625177f825e 00:08:17.904 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:17.904 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47449eed-88a1-400d-9922-5625177f825e lvol 150 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3219ed2-b9c6-4208-9ca5-14f47d96c02a 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.166 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:18.427 [2024-12-09 06:07:12.902873] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:18.427 [2024-12-09 06:07:12.902949] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:18.427 true 00:08:18.427 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
47449eed-88a1-400d-9922-5625177f825e 00:08:18.427 06:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.688 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.688 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.949 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3219ed2-b9c6-4208-9ca5-14f47d96c02a 00:08:18.949 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:19.211 [2024-12-09 06:07:13.621119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.211 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=165425 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 165425 /var/tmp/bdevperf.sock 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 165425 ']' 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.472 06:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:19.472 [2024-12-09 06:07:13.851135] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
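
What lvs_grow_clean is about to verify is the online-grow path. With --cluster-sz 4194304, the 200M aio file yields 50 clusters of which one goes to lvstore metadata, hence the data_clusters=49 check above; after the file was truncated up to 400M and rescanned (51200 -> 102400 blocks of 4 KiB), growing the store should report 99. The bdevperf job whose startup follows keeps randwrite I/O on Nvme0n1 in flight while the harness does, in effect:

  truncate -s 400M test/nvmf/target/aio_bdev    # backing file 200M -> 400M (already done above)
  rpc.py bdev_aio_rescan aio_bdev               # aio bdev picks up the new size
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"       # lvstore claims the new clusters mid-I/O
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

The free_clusters=61 seen at teardown is consistent with this: 99 data clusters minus the 38 backing the 150M lvol (150 MiB / 4 MiB, rounded up).
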
00:08:19.472 [2024-12-09 06:07:13.851203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165425 ]
00:08:19.472 [2024-12-09 06:07:13.926123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.472 [2024-12-09 06:07:13.977236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:19.734 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:19.734 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:08:19.734 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:08:19.995 Nvme0n1
00:08:19.995 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:08:19.995 [
00:08:19.995 {
00:08:19.995 "name": "Nvme0n1",
00:08:19.995 "aliases": [
00:08:19.995 "b3219ed2-b9c6-4208-9ca5-14f47d96c02a"
00:08:19.995 ],
00:08:19.995 "product_name": "NVMe disk",
00:08:19.995 "block_size": 4096,
00:08:19.995 "num_blocks": 38912,
00:08:19.995 "uuid": "b3219ed2-b9c6-4208-9ca5-14f47d96c02a",
00:08:19.995 "numa_id": 0,
00:08:19.995 "assigned_rate_limits": {
00:08:19.995 "rw_ios_per_sec": 0,
00:08:19.995 "rw_mbytes_per_sec": 0,
00:08:19.995 "r_mbytes_per_sec": 0,
00:08:19.995 "w_mbytes_per_sec": 0
00:08:19.995 },
00:08:19.995 "claimed": false,
00:08:19.995 "zoned": false,
00:08:19.995 "supported_io_types": {
00:08:19.995 "read": true,
00:08:19.995 "write": true,
00:08:19.995 "unmap": true,
00:08:19.995 "flush": true,
00:08:19.995 "reset": true,
00:08:19.995 "nvme_admin": true,
00:08:19.995 "nvme_io": true,
00:08:19.995 "nvme_io_md": false,
00:08:19.995 "write_zeroes": true,
00:08:19.995 "zcopy": false,
00:08:19.995 "get_zone_info": false,
00:08:19.995 "zone_management": false,
00:08:19.995 "zone_append": false,
00:08:19.995 "compare": true,
00:08:19.995 "compare_and_write": true,
00:08:19.995 "abort": true,
00:08:19.995 "seek_hole": false,
00:08:19.995 "seek_data": false,
00:08:19.995 "copy": true,
00:08:19.995 "nvme_iov_md": false
00:08:19.995 },
00:08:19.995 "memory_domains": [
00:08:19.995 {
00:08:19.995 "dma_device_id": "system",
00:08:19.995 "dma_device_type": 1
00:08:19.995 }
00:08:19.995 ],
00:08:19.995 "driver_specific": {
00:08:19.995 "nvme": [
00:08:19.995 {
00:08:19.995 "trid": {
00:08:19.995 "trtype": "TCP",
00:08:19.995 "adrfam": "IPv4",
00:08:19.995 "traddr": "10.0.0.2",
00:08:19.995 "trsvcid": "4420",
00:08:19.995 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:08:19.995 },
00:08:19.995 "ctrlr_data": {
00:08:19.995 "cntlid": 1,
00:08:19.995 "vendor_id": "0x8086",
00:08:19.995 "model_number": "SPDK bdev Controller",
00:08:19.995 "serial_number": "SPDK0",
00:08:19.995 "firmware_revision": "25.01",
00:08:19.995 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:19.996 "oacs": {
00:08:19.996 "security": 0,
00:08:19.996 "format": 0,
00:08:19.996 "firmware": 0,
00:08:19.996 "ns_manage": 0
00:08:19.996 },
00:08:19.996 "multi_ctrlr": true,
"ana_reporting": false
00:08:19.996 },
00:08:19.996 "vs": {
00:08:19.996 "nvme_version": "1.3"
00:08:19.996 },
00:08:19.996 "ns_data": {
00:08:19.996 "id": 1,
00:08:19.996 "can_share": true
00:08:19.996 }
00:08:19.996 }
00:08:19.996 ],
00:08:19.996 "mp_policy": "active_passive"
00:08:19.996 }
00:08:19.996 }
00:08:19.996 ]
00:08:19.996 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=165710
00:08:19.996 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:08:19.996 06:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:20.257 Running I/O for 10 seconds...
00:08:21.201 Latency(us)
00:08:21.201 [2024-12-09T05:07:15.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:21.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:21.201 Nvme0n1 : 1.00 19962.00 77.98 0.00 0.00 0.00 0.00 0.00
00:08:21.201 [2024-12-09T05:07:15.788Z] ===================================================================================================================
00:08:21.201 [2024-12-09T05:07:15.788Z] Total : 19962.00 77.98 0.00 0.00 0.00 0.00 0.00
00:08:21.201
00:08:22.143 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47449eed-88a1-400d-9922-5625177f825e
00:08:22.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:22.143 Nvme0n1 : 2.00 22410.50 87.54 0.00 0.00 0.00 0.00 0.00
00:08:22.143 [2024-12-09T05:07:16.730Z] ===================================================================================================================
00:08:22.143 [2024-12-09T05:07:16.730Z] Total : 22410.50 87.54 0.00 0.00 0.00 0.00 0.00
00:08:22.143
00:08:22.403 true
00:08:22.403 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e
00:08:22.403 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:08:22.403 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:08:22.403 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:08:22.403 06:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 165710
00:08:23.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:23.345 Nvme0n1 : 3.00 23257.67 90.85 0.00 0.00 0.00 0.00 0.00
00:08:23.345 [2024-12-09T05:07:17.932Z] ===================================================================================================================
00:08:23.345 [2024-12-09T05:07:17.932Z] Total : 23257.67 90.85 0.00 0.00 0.00 0.00 0.00
00:08:23.345
00:08:24.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:24.289 Nvme0n1 : 4.00 23698.75 92.57 0.00 0.00 0.00 0.00 0.00
=================================================================================================================== 00:08:24.289 [2024-12-09T05:07:18.876Z] Total : 23698.75 92.57 0.00 0.00 0.00 0.00 0.00 00:08:24.289 00:08:25.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.228 Nvme0n1 : 5.00 23963.60 93.61 0.00 0.00 0.00 0.00 0.00 00:08:25.228 [2024-12-09T05:07:19.815Z] =================================================================================================================== 00:08:25.228 [2024-12-09T05:07:19.815Z] Total : 23963.60 93.61 0.00 0.00 0.00 0.00 0.00 00:08:25.228 00:08:26.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.170 Nvme0n1 : 6.00 24140.00 94.30 0.00 0.00 0.00 0.00 0.00 00:08:26.170 [2024-12-09T05:07:20.757Z] =================================================================================================================== 00:08:26.170 [2024-12-09T05:07:20.757Z] Total : 24140.00 94.30 0.00 0.00 0.00 0.00 0.00 00:08:26.170 00:08:27.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.114 Nvme0n1 : 7.00 24274.57 94.82 0.00 0.00 0.00 0.00 0.00 00:08:27.114 [2024-12-09T05:07:21.701Z] =================================================================================================================== 00:08:27.114 [2024-12-09T05:07:21.701Z] Total : 24274.57 94.82 0.00 0.00 0.00 0.00 0.00 00:08:27.114 00:08:28.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.497 Nvme0n1 : 8.00 24375.50 95.22 0.00 0.00 0.00 0.00 0.00 00:08:28.497 [2024-12-09T05:07:23.084Z] =================================================================================================================== 00:08:28.497 [2024-12-09T05:07:23.084Z] Total : 24375.50 95.22 0.00 0.00 0.00 0.00 0.00 00:08:28.497 00:08:29.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.440 Nvme0n1 : 9.00 24454.00 95.52 0.00 0.00 0.00 0.00 0.00 00:08:29.440 [2024-12-09T05:07:24.027Z] =================================================================================================================== 00:08:29.440 [2024-12-09T05:07:24.027Z] Total : 24454.00 95.52 0.00 0.00 0.00 0.00 0.00 00:08:29.440 00:08:30.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.385 Nvme0n1 : 10.00 24523.70 95.80 0.00 0.00 0.00 0.00 0.00 00:08:30.385 [2024-12-09T05:07:24.972Z] =================================================================================================================== 00:08:30.385 [2024-12-09T05:07:24.972Z] Total : 24523.70 95.80 0.00 0.00 0.00 0.00 0.00 00:08:30.385 00:08:30.385 00:08:30.385 Latency(us) 00:08:30.385 [2024-12-09T05:07:24.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.385 Nvme0n1 : 10.01 24523.50 95.79 0.00 0.00 5215.99 2419.79 14619.57 00:08:30.385 [2024-12-09T05:07:24.972Z] =================================================================================================================== 00:08:30.385 [2024-12-09T05:07:24.972Z] Total : 24523.50 95.79 0.00 0.00 5215.99 2419.79 14619.57 00:08:30.385 { 00:08:30.385 "results": [ 00:08:30.385 { 00:08:30.385 "job": "Nvme0n1", 00:08:30.385 "core_mask": "0x2", 00:08:30.385 "workload": "randwrite", 00:08:30.385 "status": "finished", 00:08:30.385 "queue_depth": 128, 00:08:30.385 "io_size": 4096, 00:08:30.385 
"runtime": 10.005299, 00:08:30.385 "iops": 24523.504994703308, 00:08:30.385 "mibps": 95.7949413855598, 00:08:30.385 "io_failed": 0, 00:08:30.385 "io_timeout": 0, 00:08:30.385 "avg_latency_us": 5215.990756377077, 00:08:30.385 "min_latency_us": 2419.7907692307695, 00:08:30.385 "max_latency_us": 14619.569230769232 00:08:30.385 } 00:08:30.385 ], 00:08:30.385 "core_count": 1 00:08:30.385 } 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 165425 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 165425 ']' 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 165425 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165425 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165425' 00:08:30.385 killing process with pid 165425 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 165425 00:08:30.385 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.385 00:08:30.385 Latency(us) 00:08:30.385 [2024-12-09T05:07:24.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.385 [2024-12-09T05:07:24.972Z] =================================================================================================================== 00:08:30.385 [2024-12-09T05:07:24.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 165425 00:08:30.385 06:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.646 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.907 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:30.907 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.907 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:30.907 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:30.907 06:07:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.167 [2024-12-09 06:07:25.584660] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:31.167 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:31.427 request: 00:08:31.427 { 00:08:31.427 "uuid": "47449eed-88a1-400d-9922-5625177f825e", 00:08:31.427 "method": "bdev_lvol_get_lvstores", 00:08:31.427 "req_id": 1 00:08:31.427 } 00:08:31.427 Got JSON-RPC error response 00:08:31.427 response: 00:08:31.427 { 00:08:31.427 "code": -19, 00:08:31.427 "message": "No such device" 00:08:31.427 } 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.427 aio_bdev 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b3219ed2-b9c6-4208-9ca5-14f47d96c02a 00:08:31.427 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b3219ed2-b9c6-4208-9ca5-14f47d96c02a 00:08:31.428 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.428 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:31.428 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.428 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.428 06:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.689 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3219ed2-b9c6-4208-9ca5-14f47d96c02a -t 2000 00:08:31.689 [ 00:08:31.689 { 00:08:31.689 "name": "b3219ed2-b9c6-4208-9ca5-14f47d96c02a", 00:08:31.689 "aliases": [ 00:08:31.689 "lvs/lvol" 00:08:31.689 ], 00:08:31.689 "product_name": "Logical Volume", 00:08:31.689 "block_size": 4096, 00:08:31.689 "num_blocks": 38912, 00:08:31.689 "uuid": "b3219ed2-b9c6-4208-9ca5-14f47d96c02a", 00:08:31.689 "assigned_rate_limits": { 00:08:31.689 "rw_ios_per_sec": 0, 00:08:31.689 "rw_mbytes_per_sec": 0, 00:08:31.689 "r_mbytes_per_sec": 0, 00:08:31.689 "w_mbytes_per_sec": 0 00:08:31.689 }, 00:08:31.689 "claimed": false, 00:08:31.689 "zoned": false, 00:08:31.689 "supported_io_types": { 00:08:31.689 "read": true, 00:08:31.689 "write": true, 00:08:31.689 "unmap": true, 00:08:31.689 "flush": false, 00:08:31.689 "reset": true, 00:08:31.689 "nvme_admin": false, 00:08:31.689 "nvme_io": false, 00:08:31.689 "nvme_io_md": false, 00:08:31.689 "write_zeroes": true, 00:08:31.689 "zcopy": false, 00:08:31.689 "get_zone_info": false, 00:08:31.689 "zone_management": false, 00:08:31.689 "zone_append": false, 00:08:31.689 "compare": false, 00:08:31.689 "compare_and_write": false, 00:08:31.689 "abort": false, 00:08:31.689 "seek_hole": true, 00:08:31.689 "seek_data": true, 00:08:31.689 "copy": false, 00:08:31.689 "nvme_iov_md": false 00:08:31.689 }, 00:08:31.689 "driver_specific": { 00:08:31.689 "lvol": { 00:08:31.689 "lvol_store_uuid": "47449eed-88a1-400d-9922-5625177f825e", 00:08:31.689 "base_bdev": "aio_bdev", 00:08:31.689 "thin_provision": false, 00:08:31.689 "num_allocated_clusters": 38, 00:08:31.689 "snapshot": false, 00:08:31.689 "clone": false, 00:08:31.689 "esnap_clone": false 00:08:31.689 } 00:08:31.689 } 00:08:31.689 } 00:08:31.689 ] 00:08:31.951 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:31.951 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:31.951 
06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.951 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.951 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47449eed-88a1-400d-9922-5625177f825e 00:08:31.951 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:32.211 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:32.211 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3219ed2-b9c6-4208-9ca5-14f47d96c02a 00:08:32.211 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47449eed-88a1-400d-9922-5625177f825e 00:08:32.471 06:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.731 00:08:32.731 real 0m15.266s 00:08:32.731 user 0m14.784s 00:08:32.731 sys 0m1.385s 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 ************************************ 00:08:32.731 END TEST lvs_grow_clean 00:08:32.731 ************************************ 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 ************************************ 00:08:32.731 START TEST lvs_grow_dirty 00:08:32.731 ************************************ 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.731 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.991 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.991 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.251 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 lvol 150 00:08:33.511 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:33.511 06:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.511 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.771 [2024-12-09 06:07:28.145106] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.771 [2024-12-09 06:07:28.145147] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.771 true 00:08:33.771 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:33.771 06:07:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.771 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.771 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.031 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:34.292 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.292 [2024-12-09 06:07:28.811028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.292 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=168200 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 168200 /var/tmp/bdevperf.sock 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 168200 ']' 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.552 06:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.552 [2024-12-09 06:07:29.022885] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:08:34.552 [2024-12-09 06:07:29.022933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168200 ] 00:08:34.552 [2024-12-09 06:07:29.080561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.552 [2024-12-09 06:07:29.110322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.812 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.812 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:34.812 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.073 Nvme0n1 00:08:35.073 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.333 [ 00:08:35.333 { 00:08:35.333 "name": "Nvme0n1", 00:08:35.333 "aliases": [ 00:08:35.333 "df94f825-b61a-42fd-8fc2-feafec9b823e" 00:08:35.333 ], 00:08:35.333 "product_name": "NVMe disk", 00:08:35.333 "block_size": 4096, 00:08:35.333 "num_blocks": 38912, 00:08:35.333 "uuid": "df94f825-b61a-42fd-8fc2-feafec9b823e", 00:08:35.334 "numa_id": 0, 00:08:35.334 "assigned_rate_limits": { 00:08:35.334 "rw_ios_per_sec": 0, 00:08:35.334 "rw_mbytes_per_sec": 0, 00:08:35.334 "r_mbytes_per_sec": 0, 00:08:35.334 "w_mbytes_per_sec": 0 00:08:35.334 }, 00:08:35.334 "claimed": false, 00:08:35.334 "zoned": false, 00:08:35.334 "supported_io_types": { 00:08:35.334 "read": true, 00:08:35.334 "write": true, 00:08:35.334 "unmap": true, 00:08:35.334 "flush": true, 00:08:35.334 "reset": true, 00:08:35.334 "nvme_admin": true, 00:08:35.334 "nvme_io": true, 00:08:35.334 "nvme_io_md": false, 00:08:35.334 "write_zeroes": true, 00:08:35.334 "zcopy": false, 00:08:35.334 "get_zone_info": false, 00:08:35.334 "zone_management": false, 00:08:35.334 "zone_append": false, 00:08:35.334 "compare": true, 00:08:35.334 "compare_and_write": true, 00:08:35.334 "abort": true, 00:08:35.334 "seek_hole": false, 00:08:35.334 "seek_data": false, 00:08:35.334 "copy": true, 00:08:35.334 "nvme_iov_md": false 00:08:35.334 }, 00:08:35.334 "memory_domains": [ 00:08:35.334 { 00:08:35.334 "dma_device_id": "system", 00:08:35.334 "dma_device_type": 1 00:08:35.334 } 00:08:35.334 ], 00:08:35.334 "driver_specific": { 00:08:35.334 "nvme": [ 00:08:35.334 { 00:08:35.334 "trid": { 00:08:35.334 "trtype": "TCP", 00:08:35.334 "adrfam": "IPv4", 00:08:35.334 "traddr": "10.0.0.2", 00:08:35.334 "trsvcid": "4420", 00:08:35.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.334 }, 00:08:35.334 "ctrlr_data": { 00:08:35.334 "cntlid": 1, 00:08:35.334 "vendor_id": "0x8086", 00:08:35.334 "model_number": "SPDK bdev Controller", 00:08:35.334 "serial_number": "SPDK0", 00:08:35.334 "firmware_revision": "25.01", 00:08:35.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.334 "oacs": { 00:08:35.334 "security": 0, 00:08:35.334 "format": 0, 00:08:35.334 "firmware": 0, 00:08:35.334 "ns_manage": 0 00:08:35.334 }, 00:08:35.334 "multi_ctrlr": true, 00:08:35.334 
"ana_reporting": false 00:08:35.334 }, 00:08:35.334 "vs": { 00:08:35.334 "nvme_version": "1.3" 00:08:35.334 }, 00:08:35.334 "ns_data": { 00:08:35.334 "id": 1, 00:08:35.334 "can_share": true 00:08:35.334 } 00:08:35.334 } 00:08:35.334 ], 00:08:35.334 "mp_policy": "active_passive" 00:08:35.334 } 00:08:35.334 } 00:08:35.334 ] 00:08:35.334 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=168246 00:08:35.334 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.334 06:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.334 Running I/O for 10 seconds... 00:08:36.277 Latency(us) 00:08:36.277 [2024-12-09T05:07:30.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.278 Nvme0n1 : 1.00 24702.00 96.49 0.00 0.00 0.00 0.00 0.00 00:08:36.278 [2024-12-09T05:07:30.865Z] =================================================================================================================== 00:08:36.278 [2024-12-09T05:07:30.865Z] Total : 24702.00 96.49 0.00 0.00 0.00 0.00 0.00 00:08:36.278 00:08:37.219 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:37.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.219 Nvme0n1 : 2.00 24798.00 96.87 0.00 0.00 0.00 0.00 0.00 00:08:37.219 [2024-12-09T05:07:31.806Z] =================================================================================================================== 00:08:37.219 [2024-12-09T05:07:31.806Z] Total : 24798.00 96.87 0.00 0.00 0.00 0.00 0.00 00:08:37.219 00:08:37.480 true 00:08:37.480 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:37.480 06:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.480 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.480 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.480 06:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 168246 00:08:38.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.422 Nvme0n1 : 3.00 24852.00 97.08 0.00 0.00 0.00 0.00 0.00 00:08:38.422 [2024-12-09T05:07:33.009Z] =================================================================================================================== 00:08:38.422 [2024-12-09T05:07:33.009Z] Total : 24852.00 97.08 0.00 0.00 0.00 0.00 0.00 00:08:38.422 00:08:39.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.365 Nvme0n1 : 4.00 24899.00 97.26 0.00 0.00 0.00 0.00 0.00 00:08:39.365 [2024-12-09T05:07:33.952Z] 
=================================================================================================================== 00:08:39.365 [2024-12-09T05:07:33.952Z] Total : 24899.00 97.26 0.00 0.00 0.00 0.00 0.00 00:08:39.365 00:08:40.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.307 Nvme0n1 : 5.00 24933.00 97.39 0.00 0.00 0.00 0.00 0.00 00:08:40.307 [2024-12-09T05:07:34.894Z] =================================================================================================================== 00:08:40.307 [2024-12-09T05:07:34.894Z] Total : 24933.00 97.39 0.00 0.00 0.00 0.00 0.00 00:08:40.307 00:08:41.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.248 Nvme0n1 : 6.00 24959.00 97.50 0.00 0.00 0.00 0.00 0.00 00:08:41.248 [2024-12-09T05:07:35.835Z] =================================================================================================================== 00:08:41.248 [2024-12-09T05:07:35.835Z] Total : 24959.00 97.50 0.00 0.00 0.00 0.00 0.00 00:08:41.248 00:08:42.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.632 Nvme0n1 : 7.00 24977.29 97.57 0.00 0.00 0.00 0.00 0.00 00:08:42.632 [2024-12-09T05:07:37.219Z] =================================================================================================================== 00:08:42.632 [2024-12-09T05:07:37.219Z] Total : 24977.29 97.57 0.00 0.00 0.00 0.00 0.00 00:08:42.632 00:08:43.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.201 Nvme0n1 : 8.00 24998.88 97.65 0.00 0.00 0.00 0.00 0.00 00:08:43.201 [2024-12-09T05:07:37.788Z] =================================================================================================================== 00:08:43.201 [2024-12-09T05:07:37.788Z] Total : 24998.88 97.65 0.00 0.00 0.00 0.00 0.00 00:08:43.201 00:08:44.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.583 Nvme0n1 : 9.00 25012.56 97.71 0.00 0.00 0.00 0.00 0.00 00:08:44.583 [2024-12-09T05:07:39.170Z] =================================================================================================================== 00:08:44.583 [2024-12-09T05:07:39.170Z] Total : 25012.56 97.71 0.00 0.00 0.00 0.00 0.00 00:08:44.583 00:08:45.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.526 Nvme0n1 : 10.00 25023.10 97.75 0.00 0.00 0.00 0.00 0.00 00:08:45.526 [2024-12-09T05:07:40.113Z] =================================================================================================================== 00:08:45.526 [2024-12-09T05:07:40.113Z] Total : 25023.10 97.75 0.00 0.00 0.00 0.00 0.00 00:08:45.526 00:08:45.526 00:08:45.526 Latency(us) 00:08:45.526 [2024-12-09T05:07:40.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.526 Nvme0n1 : 10.00 25023.97 97.75 0.00 0.00 5112.07 1537.58 8922.98 00:08:45.526 [2024-12-09T05:07:40.114Z] =================================================================================================================== 00:08:45.527 [2024-12-09T05:07:40.114Z] Total : 25023.97 97.75 0.00 0.00 5112.07 1537.58 8922.98 00:08:45.527 { 00:08:45.527 "results": [ 00:08:45.527 { 00:08:45.527 "job": "Nvme0n1", 00:08:45.527 "core_mask": "0x2", 00:08:45.527 "workload": "randwrite", 00:08:45.527 "status": "finished", 00:08:45.527 "queue_depth": 128, 00:08:45.527 "io_size": 4096, 00:08:45.527 
"runtime": 10.004767, 00:08:45.527 "iops": 25023.971072989505, 00:08:45.527 "mibps": 97.74988700386525, 00:08:45.527 "io_failed": 0, 00:08:45.527 "io_timeout": 0, 00:08:45.527 "avg_latency_us": 5112.068527219528, 00:08:45.527 "min_latency_us": 1537.5753846153846, 00:08:45.527 "max_latency_us": 8922.978461538461 00:08:45.527 } 00:08:45.527 ], 00:08:45.527 "core_count": 1 00:08:45.527 } 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 168200 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 168200 ']' 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 168200 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168200 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168200' 00:08:45.527 killing process with pid 168200 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 168200 00:08:45.527 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.527 00:08:45.527 Latency(us) 00:08:45.527 [2024-12-09T05:07:40.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.527 [2024-12-09T05:07:40.114Z] =================================================================================================================== 00:08:45.527 [2024-12-09T05:07:40.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 168200 00:08:45.527 06:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.787 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.787 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.787 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:46.048 06:07:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 165043 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 165043 00:08:46.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 165043 Killed "${NVMF_APP[@]}" "$@" 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=170088 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 170088 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 170088 ']' 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.048 06:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.048 [2024-12-09 06:07:40.610224] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:08:46.048 [2024-12-09 06:07:40.610278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.309 [2024-12-09 06:07:40.697259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.309 [2024-12-09 06:07:40.728112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.309 [2024-12-09 06:07:40.728143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.309 [2024-12-09 06:07:40.728150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.309 [2024-12-09 06:07:40.728155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:46.309 [2024-12-09 06:07:40.728159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.309 [2024-12-09 06:07:40.728660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.883 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.144 [2024-12-09 06:07:41.607457] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:47.144 [2024-12-09 06:07:41.607532] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:47.144 [2024-12-09 06:07:41.607555] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.144 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.410 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df94f825-b61a-42fd-8fc2-feafec9b823e -t 2000 00:08:47.410 [ 00:08:47.410 { 00:08:47.410 "name": "df94f825-b61a-42fd-8fc2-feafec9b823e", 00:08:47.410 "aliases": [ 00:08:47.410 "lvs/lvol" 00:08:47.410 ], 00:08:47.410 "product_name": "Logical Volume", 00:08:47.410 "block_size": 4096, 00:08:47.410 "num_blocks": 38912, 00:08:47.410 "uuid": "df94f825-b61a-42fd-8fc2-feafec9b823e", 00:08:47.410 "assigned_rate_limits": { 00:08:47.410 "rw_ios_per_sec": 0, 00:08:47.410 "rw_mbytes_per_sec": 0, 
00:08:47.410 "r_mbytes_per_sec": 0, 00:08:47.410 "w_mbytes_per_sec": 0 00:08:47.410 }, 00:08:47.410 "claimed": false, 00:08:47.410 "zoned": false, 00:08:47.410 "supported_io_types": { 00:08:47.410 "read": true, 00:08:47.410 "write": true, 00:08:47.410 "unmap": true, 00:08:47.410 "flush": false, 00:08:47.410 "reset": true, 00:08:47.410 "nvme_admin": false, 00:08:47.410 "nvme_io": false, 00:08:47.410 "nvme_io_md": false, 00:08:47.410 "write_zeroes": true, 00:08:47.410 "zcopy": false, 00:08:47.410 "get_zone_info": false, 00:08:47.410 "zone_management": false, 00:08:47.410 "zone_append": false, 00:08:47.410 "compare": false, 00:08:47.410 "compare_and_write": false, 00:08:47.411 "abort": false, 00:08:47.411 "seek_hole": true, 00:08:47.411 "seek_data": true, 00:08:47.411 "copy": false, 00:08:47.411 "nvme_iov_md": false 00:08:47.411 }, 00:08:47.411 "driver_specific": { 00:08:47.411 "lvol": { 00:08:47.411 "lvol_store_uuid": "328e8c10-5675-4c0d-b2dc-2d278620fbd7", 00:08:47.411 "base_bdev": "aio_bdev", 00:08:47.411 "thin_provision": false, 00:08:47.411 "num_allocated_clusters": 38, 00:08:47.411 "snapshot": false, 00:08:47.411 "clone": false, 00:08:47.411 "esnap_clone": false 00:08:47.411 } 00:08:47.411 } 00:08:47.411 } 00:08:47.411 ] 00:08:47.411 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:47.411 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:47.411 06:07:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:47.672 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:47.672 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:47.672 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.932 [2024-12-09 06:07:42.452051] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:47.932 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:48.192 request: 00:08:48.192 { 00:08:48.192 "uuid": "328e8c10-5675-4c0d-b2dc-2d278620fbd7", 00:08:48.192 "method": "bdev_lvol_get_lvstores", 00:08:48.192 "req_id": 1 00:08:48.192 } 00:08:48.192 Got JSON-RPC error response 00:08:48.192 response: 00:08:48.192 { 00:08:48.192 "code": -19, 00:08:48.192 "message": "No such device" 00:08:48.192 } 00:08:48.192 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:48.192 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.193 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.193 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.193 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.453 aio_bdev 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.453 06:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.453 06:07:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.454 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df94f825-b61a-42fd-8fc2-feafec9b823e -t 2000 00:08:48.715 [ 00:08:48.715 { 00:08:48.715 "name": "df94f825-b61a-42fd-8fc2-feafec9b823e", 00:08:48.715 "aliases": [ 00:08:48.715 "lvs/lvol" 00:08:48.715 ], 00:08:48.715 "product_name": "Logical Volume", 00:08:48.715 "block_size": 4096, 00:08:48.715 "num_blocks": 38912, 00:08:48.715 "uuid": "df94f825-b61a-42fd-8fc2-feafec9b823e", 00:08:48.715 "assigned_rate_limits": { 00:08:48.715 "rw_ios_per_sec": 0, 00:08:48.715 "rw_mbytes_per_sec": 0, 00:08:48.715 "r_mbytes_per_sec": 0, 00:08:48.715 "w_mbytes_per_sec": 0 00:08:48.715 }, 00:08:48.715 "claimed": false, 00:08:48.715 "zoned": false, 00:08:48.715 "supported_io_types": { 00:08:48.715 "read": true, 00:08:48.715 "write": true, 00:08:48.715 "unmap": true, 00:08:48.715 "flush": false, 00:08:48.715 "reset": true, 00:08:48.715 "nvme_admin": false, 00:08:48.715 "nvme_io": false, 00:08:48.715 "nvme_io_md": false, 00:08:48.715 "write_zeroes": true, 00:08:48.715 "zcopy": false, 00:08:48.715 "get_zone_info": false, 00:08:48.715 "zone_management": false, 00:08:48.715 "zone_append": false, 00:08:48.715 "compare": false, 00:08:48.715 "compare_and_write": false, 00:08:48.715 "abort": false, 00:08:48.715 "seek_hole": true, 00:08:48.715 "seek_data": true, 00:08:48.715 "copy": false, 00:08:48.715 "nvme_iov_md": false 00:08:48.715 }, 00:08:48.715 "driver_specific": { 00:08:48.715 "lvol": { 00:08:48.715 "lvol_store_uuid": "328e8c10-5675-4c0d-b2dc-2d278620fbd7", 00:08:48.715 "base_bdev": "aio_bdev", 00:08:48.715 "thin_provision": false, 00:08:48.715 "num_allocated_clusters": 38, 00:08:48.715 "snapshot": false, 00:08:48.715 "clone": false, 00:08:48.715 "esnap_clone": false 00:08:48.715 } 00:08:48.715 } 00:08:48.715 } 00:08:48.715 ] 00:08:48.715 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:48.715 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:48.715 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.975 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.976 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:48.976 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.976 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.976 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df94f825-b61a-42fd-8fc2-feafec9b823e 00:08:49.236 06:07:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 328e8c10-5675-4c0d-b2dc-2d278620fbd7 00:08:49.495 06:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.495 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.756 00:08:49.757 real 0m16.802s 00:08:49.757 user 0m43.698s 00:08:49.757 sys 0m2.989s 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.757 ************************************ 00:08:49.757 END TEST lvs_grow_dirty 00:08:49.757 ************************************ 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.757 nvmf_trace.0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.757 rmmod nvme_tcp 00:08:49.757 rmmod nvme_fabrics 00:08:49.757 rmmod nvme_keyring 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:49.757 
06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 170088 ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 170088 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 170088 ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 170088 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170088 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170088' 00:08:49.757 killing process with pid 170088 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 170088 00:08:49.757 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 170088 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.018 06:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.935 00:08:51.935 real 0m43.146s 00:08:51.935 user 1m4.793s 00:08:51.935 sys 0m10.233s 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.935 ************************************ 00:08:51.935 END TEST nvmf_lvs_grow 00:08:51.935 ************************************ 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.935 06:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 ************************************ 00:08:52.196 START TEST nvmf_bdev_io_wait 00:08:52.196 ************************************ 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.196 * Looking for test storage... 00:08:52.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.196 --rc genhtml_branch_coverage=1 00:08:52.196 --rc genhtml_function_coverage=1 00:08:52.196 --rc genhtml_legend=1 00:08:52.196 --rc geninfo_all_blocks=1 00:08:52.196 --rc geninfo_unexecuted_blocks=1 00:08:52.196 00:08:52.196 ' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.196 --rc genhtml_branch_coverage=1 00:08:52.196 --rc genhtml_function_coverage=1 00:08:52.196 --rc genhtml_legend=1 00:08:52.196 --rc geninfo_all_blocks=1 00:08:52.196 --rc geninfo_unexecuted_blocks=1 00:08:52.196 00:08:52.196 ' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.196 --rc genhtml_branch_coverage=1 00:08:52.196 --rc genhtml_function_coverage=1 00:08:52.196 --rc genhtml_legend=1 00:08:52.196 --rc geninfo_all_blocks=1 00:08:52.196 --rc geninfo_unexecuted_blocks=1 00:08:52.196 00:08:52.196 ' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.196 --rc genhtml_branch_coverage=1 00:08:52.196 --rc genhtml_function_coverage=1 00:08:52.196 --rc genhtml_legend=1 00:08:52.196 --rc geninfo_all_blocks=1 00:08:52.196 --rc geninfo_unexecuted_blocks=1 00:08:52.196 00:08:52.196 ' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.196 06:07:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.196 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.197 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.458 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.458 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.458 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.458 06:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.598 06:07:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.598 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.599 06:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:09:00.599 00:09:00.599 --- 10.0.0.2 ping statistics --- 00:09:00.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.599 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:09:00.599 00:09:00.599 --- 10.0.0.1 ping statistics --- 00:09:00.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.599 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=174711 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 174711 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 174711 ']' 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.599 06:07:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.599 [2024-12-09 06:07:54.287430] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:09:00.599 [2024-12-09 06:07:54.287508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.599 [2024-12-09 06:07:54.385141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.599 [2024-12-09 06:07:54.437808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.599 [2024-12-09 06:07:54.437866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.599 [2024-12-09 06:07:54.437875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.599 [2024-12-09 06:07:54.437882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.599 [2024-12-09 06:07:54.437888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.599 [2024-12-09 06:07:54.439759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.599 [2024-12-09 06:07:54.439919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.599 [2024-12-09 06:07:54.440072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.599 [2024-12-09 06:07:54.440072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.599 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:00.860 [2024-12-09 06:07:55.190963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.860 Malloc0 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.860 [2024-12-09 06:07:55.234776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=175015 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=175017 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=175018 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=175020 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:00.860 06:07:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.860 { 00:09:00.860 "params": { 00:09:00.860 "name": "Nvme$subsystem", 00:09:00.860 "trtype": "$TEST_TRANSPORT", 00:09:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.860 "adrfam": "ipv4", 00:09:00.860 "trsvcid": "$NVMF_PORT", 00:09:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.860 "hdgst": ${hdgst:-false}, 00:09:00.860 "ddgst": ${ddgst:-false} 00:09:00.860 }, 00:09:00.860 "method": "bdev_nvme_attach_controller" 00:09:00.860 } 00:09:00.860 EOF 00:09:00.860 )") 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.860 { 00:09:00.860 "params": { 00:09:00.860 "name": "Nvme$subsystem", 00:09:00.860 "trtype": "$TEST_TRANSPORT", 00:09:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.860 "adrfam": "ipv4", 00:09:00.860 "trsvcid": "$NVMF_PORT", 00:09:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.860 "hdgst": ${hdgst:-false}, 00:09:00.860 "ddgst": ${ddgst:-false} 00:09:00.860 }, 00:09:00.860 "method": "bdev_nvme_attach_controller" 00:09:00.860 } 00:09:00.860 EOF 00:09:00.860 )") 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.860 
{ 00:09:00.860 "params": { 00:09:00.860 "name": "Nvme$subsystem", 00:09:00.860 "trtype": "$TEST_TRANSPORT", 00:09:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.860 "adrfam": "ipv4", 00:09:00.860 "trsvcid": "$NVMF_PORT", 00:09:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.860 "hdgst": ${hdgst:-false}, 00:09:00.860 "ddgst": ${ddgst:-false} 00:09:00.860 }, 00:09:00.860 "method": "bdev_nvme_attach_controller" 00:09:00.860 } 00:09:00.860 EOF 00:09:00.860 )") 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 175015 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.860 { 00:09:00.860 "params": { 00:09:00.860 "name": "Nvme$subsystem", 00:09:00.860 "trtype": "$TEST_TRANSPORT", 00:09:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.860 "adrfam": "ipv4", 00:09:00.860 "trsvcid": "$NVMF_PORT", 00:09:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.860 "hdgst": ${hdgst:-false}, 00:09:00.860 "ddgst": ${ddgst:-false} 00:09:00.860 }, 00:09:00.860 "method": "bdev_nvme_attach_controller" 00:09:00.860 } 00:09:00.860 EOF 00:09:00.860 )") 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:00.860 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.860 "params": { 00:09:00.861 "name": "Nvme1", 00:09:00.861 "trtype": "tcp", 00:09:00.861 "traddr": "10.0.0.2", 00:09:00.861 "adrfam": "ipv4", 00:09:00.861 "trsvcid": "4420", 00:09:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.861 "hdgst": false, 00:09:00.861 "ddgst": false 00:09:00.861 }, 00:09:00.861 "method": "bdev_nvme_attach_controller" 00:09:00.861 }' 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.861 "params": { 00:09:00.861 "name": "Nvme1", 00:09:00.861 "trtype": "tcp", 00:09:00.861 "traddr": "10.0.0.2", 00:09:00.861 "adrfam": "ipv4", 00:09:00.861 "trsvcid": "4420", 00:09:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.861 "hdgst": false, 00:09:00.861 "ddgst": false 00:09:00.861 }, 00:09:00.861 "method": "bdev_nvme_attach_controller" 00:09:00.861 }' 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.861 "params": { 00:09:00.861 "name": "Nvme1", 00:09:00.861 "trtype": "tcp", 00:09:00.861 "traddr": "10.0.0.2", 00:09:00.861 "adrfam": "ipv4", 00:09:00.861 "trsvcid": "4420", 00:09:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.861 "hdgst": false, 00:09:00.861 "ddgst": false 00:09:00.861 }, 00:09:00.861 "method": "bdev_nvme_attach_controller" 00:09:00.861 }' 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:00.861 06:07:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.861 "params": { 00:09:00.861 "name": "Nvme1", 00:09:00.861 "trtype": "tcp", 00:09:00.861 "traddr": "10.0.0.2", 00:09:00.861 "adrfam": "ipv4", 00:09:00.861 "trsvcid": "4420", 00:09:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.861 "hdgst": false, 00:09:00.861 "ddgst": false 00:09:00.861 }, 00:09:00.861 "method": "bdev_nvme_attach_controller" 00:09:00.861 }' 00:09:00.861 [2024-12-09 06:07:55.268354] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:09:00.861 [2024-12-09 06:07:55.268460] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:00.861 [2024-12-09 06:07:55.288484] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:09:00.861 [2024-12-09 06:07:55.288534] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:00.861 [2024-12-09 06:07:55.289075] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:09:00.861 [2024-12-09 06:07:55.289117] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:00.861 [2024-12-09 06:07:55.290934] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:09:00.861 [2024-12-09 06:07:55.290978] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:00.861 [2024-12-09 06:07:55.404213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.861 [2024-12-09 06:07:55.431496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.121 [2024-12-09 06:07:55.476197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.121 [2024-12-09 06:07:55.502924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:01.121 [2024-12-09 06:07:55.535207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.121 [2024-12-09 06:07:55.563393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:01.121 [2024-12-09 06:07:55.564058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.121 [2024-12-09 06:07:55.591642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:01.121 Running I/O for 1 seconds... 00:09:01.380 Running I/O for 1 seconds... 00:09:01.380 Running I/O for 1 seconds... 00:09:01.380 Running I/O for 1 seconds... 00:09:02.320 17692.00 IOPS, 69.11 MiB/s 00:09:02.320 Latency(us) 00:09:02.320 [2024-12-09T05:07:56.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.320 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:02.320 Nvme1n1 : 1.01 17749.26 69.33 0.00 0.00 7191.23 3831.34 15930.29 00:09:02.320 [2024-12-09T05:07:56.907Z] =================================================================================================================== 00:09:02.320 [2024-12-09T05:07:56.907Z] Total : 17749.26 69.33 0.00 0.00 7191.23 3831.34 15930.29 00:09:02.320 188080.00 IOPS, 734.69 MiB/s [2024-12-09T05:07:56.907Z] 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 175017 00:09:02.320 00:09:02.320 Latency(us) 00:09:02.320 [2024-12-09T05:07:56.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.320 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:02.320 Nvme1n1 : 1.00 187740.30 733.36 0.00 0.00 678.06 270.97 1802.24 00:09:02.320 [2024-12-09T05:07:56.907Z] =================================================================================================================== 00:09:02.320 [2024-12-09T05:07:56.907Z] Total : 187740.30 733.36 0.00 0.00 678.06 270.97 1802.24 00:09:02.320 15508.00 IOPS, 60.58 MiB/s 00:09:02.320 Latency(us) 00:09:02.320 [2024-12-09T05:07:56.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.320 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:02.320 Nvme1n1 : 1.01 15560.74 60.78 0.00 0.00 8203.79 3478.45 18249.26 00:09:02.320 [2024-12-09T05:07:56.907Z] =================================================================================================================== 00:09:02.320 
[2024-12-09T05:07:56.907Z] Total : 15560.74 60.78 0.00 0.00 8203.79 3478.45 18249.26 00:09:02.320 13712.00 IOPS, 53.56 MiB/s 00:09:02.320 Latency(us) 00:09:02.320 [2024-12-09T05:07:56.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.320 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:02.320 Nvme1n1 : 1.01 13789.44 53.87 0.00 0.00 9255.82 3087.75 15930.29 00:09:02.320 [2024-12-09T05:07:56.907Z] =================================================================================================================== 00:09:02.320 [2024-12-09T05:07:56.907Z] Total : 13789.44 53.87 0.00 0.00 9255.82 3087.75 15930.29 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 175018 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 175020 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.320 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.320 rmmod nvme_tcp 00:09:02.581 rmmod nvme_fabrics 00:09:02.581 rmmod nvme_keyring 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 174711 ']' 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 174711 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 174711 ']' 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 174711 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.581 06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174711 00:09:02.581 
06:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174711' 00:09:02.581 killing process with pid 174711 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 174711 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 174711 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.581 06:07:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.130 00:09:05.130 real 0m12.652s 00:09:05.130 user 0m17.982s 00:09:05.130 sys 0m6.855s 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.130 ************************************ 00:09:05.130 END TEST nvmf_bdev_io_wait 00:09:05.130 ************************************ 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.130 ************************************ 00:09:05.130 START TEST nvmf_queue_depth 00:09:05.130 ************************************ 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.130 * Looking for test storage... 
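Each suite in this run is driven through the run_test helper, which prints the START TEST/END TEST banners and the real/user/sys timing block shown above for nvmf_bdev_io_wait. A simplified sketch of its shape, assuming the helper in test/common/autotest_common.sh; the real one also toggles xtrace state and validates its arguments:

  run_test() {
      local test_name=$1; shift
      echo "************ START TEST $test_name ************"
      time "$@"                  # emits the real/user/sys summary above
      echo "************ END TEST $test_name ************"
  }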
00:09:05.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:05.130 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.131 --rc genhtml_branch_coverage=1 00:09:05.131 --rc genhtml_function_coverage=1 00:09:05.131 --rc genhtml_legend=1 00:09:05.131 --rc geninfo_all_blocks=1 00:09:05.131 --rc geninfo_unexecuted_blocks=1 00:09:05.131 00:09:05.131 ' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.131 --rc genhtml_branch_coverage=1 00:09:05.131 --rc genhtml_function_coverage=1 00:09:05.131 --rc genhtml_legend=1 00:09:05.131 --rc geninfo_all_blocks=1 00:09:05.131 --rc geninfo_unexecuted_blocks=1 00:09:05.131 00:09:05.131 ' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.131 --rc genhtml_branch_coverage=1 00:09:05.131 --rc genhtml_function_coverage=1 00:09:05.131 --rc genhtml_legend=1 00:09:05.131 --rc geninfo_all_blocks=1 00:09:05.131 --rc geninfo_unexecuted_blocks=1 00:09:05.131 00:09:05.131 ' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.131 --rc genhtml_branch_coverage=1 00:09:05.131 --rc genhtml_function_coverage=1 00:09:05.131 --rc genhtml_legend=1 00:09:05.131 --rc geninfo_all_blocks=1 00:09:05.131 --rc geninfo_unexecuted_blocks=1 00:09:05.131 00:09:05.131 ' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
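The lt 1.15 2 walk traced above is the cmp_versions helper from scripts/common.sh: both version strings are split on the characters ".-:" and compared field by field as integers, so 1.15 sorts below 2 on the first field alone. A condensed, behavior-equivalent sketch for integer-valued fields, not the verbatim helper:

  version_lt() {               # returns 0 when $1 < $2
      local IFS='.-:' i
      local -a v1=($1) v2=($2)
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1                 # equal versions are not less-than
  }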
-- nvmf/common.sh@7 -- # uname -s 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
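The "[: : integer expression expected" message above is a real, if benign, defect in the sourced common.sh: build_nvmf_app_args reaches line 33 with an empty string where test(1) expects an integer, i.e. '[' '' -eq 1 ']'. An illustration of the failure mode and the usual guard; the variable name here is hypothetical:

  interrupt_mode=''
  [ "$interrupt_mode" -eq 1 ]       # [: : integer expression expected
  [ "${interrupt_mode:-0}" -eq 1 ]  # guarded: an empty value defaults to 0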
MALLOC_BLOCK_SIZE=512 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.131 06:07:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.291 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.291 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:13.292 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:13.292 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:13.292 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:13.292 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
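The discovery loop above matches the two Intel E810 functions (device id 0x159b, bound to the ice driver) and then resolves each PCI function to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. The sysfs walk it performs, condensed from the trace:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      ls /sys/bus/pci/devices/$pci/net/   # -> cvl_0_0 and cvl_0_1
  done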
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:09:13.292 00:09:13.292 --- 10.0.0.2 ping statistics --- 00:09:13.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.292 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:09:13.292 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:13.292 00:09:13.292 --- 10.0.0.1 ping statistics --- 00:09:13.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.292 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=179266 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 179266 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 179266 ']' 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.293 [2024-12-09 06:08:06.855966] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
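The nvmf_tcp_init sequence traced above builds a two-endpoint topology on one machine: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic leaves one E810 port and enters the other (the two ports are presumably cabled back to back). The two pings then confirm the path in both directions before any NVMe traffic flows. The essential commands, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # open the NVMe/TCP port; the rule is tagged so teardown can find it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'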
00:09:13.293 [2024-12-09 06:08:06.856028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.293 [2024-12-09 06:08:06.936039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.293 [2024-12-09 06:08:06.985662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.293 [2024-12-09 06:08:06.985714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.293 [2024-12-09 06:08:06.985724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.293 [2024-12-09 06:08:06.985734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.293 [2024-12-09 06:08:06.985742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.293 [2024-12-09 06:08:06.986434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 [2024-12-09 06:08:07.716576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 Malloc0 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 06:08:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 [2024-12-09 06:08:07.761596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=179507 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 179507 /var/tmp/bdevperf.sock 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 179507 ']' 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:13.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 06:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:13.293 [2024-12-09 06:08:07.825546] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
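With the link verified, the target is configured entirely over JSON-RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and nvmf_tgt itself was started inside the namespace with -m 0x2, which is why its reactor reports core 1. The five calls above, condensed into their plain rpc.py form with the flags exactly as traced:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                  # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420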
00:09:13.293 [2024-12-09 06:08:07.825623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179507 ] 00:09:13.554 [2024-12-09 06:08:07.917275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.554 [2024-12-09 06:08:07.968003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.126 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.126 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:14.126 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:14.126 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.126 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.389 NVMe0n1 00:09:14.389 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.389 06:08:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.389 Running I/O for 10 seconds... 00:09:16.271 10621.00 IOPS, 41.49 MiB/s [2024-12-09T05:08:12.244Z] 11264.00 IOPS, 44.00 MiB/s [2024-12-09T05:08:13.187Z] 11518.33 IOPS, 44.99 MiB/s [2024-12-09T05:08:14.128Z] 11598.00 IOPS, 45.30 MiB/s [2024-12-09T05:08:15.068Z] 11961.60 IOPS, 46.73 MiB/s [2024-12-09T05:08:16.006Z] 12173.67 IOPS, 47.55 MiB/s [2024-12-09T05:08:16.947Z] 12366.00 IOPS, 48.30 MiB/s [2024-12-09T05:08:17.887Z] 12527.00 IOPS, 48.93 MiB/s [2024-12-09T05:08:19.296Z] 12628.00 IOPS, 49.33 MiB/s [2024-12-09T05:08:19.296Z] 12704.20 IOPS, 49.63 MiB/s 00:09:24.709 Latency(us) 00:09:24.709 [2024-12-09T05:08:19.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.709 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:24.709 Verification LBA range: start 0x0 length 0x4000 00:09:24.709 NVMe0n1 : 10.04 12749.67 49.80 0.00 0.00 80049.38 7864.32 67754.14 00:09:24.709 [2024-12-09T05:08:19.296Z] =================================================================================================================== 00:09:24.709 [2024-12-09T05:08:19.296Z] Total : 12749.67 49.80 0.00 0.00 80049.38 7864.32 67754.14 00:09:24.709 { 00:09:24.709 "results": [ 00:09:24.709 { 00:09:24.709 "job": "NVMe0n1", 00:09:24.709 "core_mask": "0x1", 00:09:24.710 "workload": "verify", 00:09:24.710 "status": "finished", 00:09:24.710 "verify_range": { 00:09:24.710 "start": 0, 00:09:24.710 "length": 16384 00:09:24.710 }, 00:09:24.710 "queue_depth": 1024, 00:09:24.710 "io_size": 4096, 00:09:24.710 "runtime": 10.043479, 00:09:24.710 "iops": 12749.665728379578, 00:09:24.710 "mibps": 49.803381751482725, 00:09:24.710 "io_failed": 0, 00:09:24.710 "io_timeout": 0, 00:09:24.710 "avg_latency_us": 80049.38276405496, 00:09:24.710 "min_latency_us": 7864.32, 00:09:24.710 "max_latency_us": 67754.14153846154 00:09:24.710 } 00:09:24.710 ], 00:09:24.710 "core_count": 1 00:09:24.710 } 00:09:24.710 06:08:18 
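The 10-second verify run above is driven in three steps: bdevperf starts idle (-z) on its own RPC socket, the NVMe/TCP controller is attached through that socket, and bdevperf.py fires perform_tests. All three appear in the trace; condensed:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The point of -q 1024 is the test's name: a queue depth well above what a single qpair accepts exercises queueing above the connection, and the run completes with zero failed or timed-out I/Os. The numbers are self-consistent by Little's law: 12,750 IOPS x 0.080 s average latency is roughly 1,020 commands in flight, i.e. the full queue depth.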
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 179507 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 179507 ']' 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 179507 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179507 00:09:24.710 06:08:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179507' 00:09:24.710 killing process with pid 179507 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 179507 00:09:24.710 Received shutdown signal, test time was about 10.000000 seconds 00:09:24.710 00:09:24.710 Latency(us) 00:09:24.710 [2024-12-09T05:08:19.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.710 [2024-12-09T05:08:19.297Z] =================================================================================================================== 00:09:24.710 [2024-12-09T05:08:19.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 179507 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.710 rmmod nvme_tcp 00:09:24.710 rmmod nvme_fabrics 00:09:24.710 rmmod nvme_keyring 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 179266 ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 179266 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 179266 ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 179266 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179266 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179266' 00:09:24.710 killing process with pid 179266 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 179266 00:09:24.710 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 179266 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.972 06:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.889 00:09:26.889 real 0m22.149s 00:09:26.889 user 0m25.511s 00:09:26.889 sys 0m6.876s 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.889 ************************************ 00:09:26.889 END TEST nvmf_queue_depth 00:09:26.889 ************************************ 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.889 06:08:21 nvmf_tcp.nvmf_target_core -- 
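Teardown then unwinds in order: the bdevperf process (pid 179507) is reaped, the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the target (pid 179266) is killed, iptr strips exactly the firewall rules tagged SPDK_NVMF, and the namespace and leftover address are flushed. The firewall and address steps, condensed from the trace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1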
common/autotest_common.sh@10 -- # set +x 00:09:27.152 ************************************ 00:09:27.152 START TEST nvmf_target_multipath 00:09:27.152 ************************************ 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.152 * Looking for test storage... 00:09:27.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.152 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.153 06:08:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:35.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:35.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:35.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.300 06:08:28 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:35.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.300 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:35.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:09:35.301 00:09:35.301 --- 10.0.0.2 ping statistics --- 00:09:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.301 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:09:35.301 00:09:35.301 --- 10.0.0.1 ping statistics --- 00:09:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.301 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.301 06:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:35.301 only one NIC for nvmf test 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
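Taken together, the nvmf_tcp_init commands traced above build a point-to-point NVMe/TCP topology out of the two physical ports: cvl_0_0 is moved into a network namespace as the target side and cvl_0_1 stays on the host as the initiator. A condensed, runnable sketch of that sequence follows (interface names, namespace name, and 10.0.0.0/24 addresses are the ones printed in the log; helper-function indirection and error handling are omitted, and the nvmftestfini teardown trace continues below):

#!/usr/bin/env bash
# Sketch of the traced nvmf_tcp_init sequence, condensed from the log above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tag marks the rule so the teardown can strip exactly this one later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # host reaches the namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the target reaches the host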
00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.301 rmmod nvme_tcp 00:09:35.301 rmmod nvme_fabrics 00:09:35.301 rmmod nvme_keyring 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.301 06:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.689 00:09:36.689 real 0m9.729s 00:09:36.689 user 0m2.108s 00:09:36.689 sys 0m5.560s 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.689 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:36.689 ************************************ 00:09:36.689 END TEST nvmf_target_multipath 00:09:36.689 ************************************ 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.951 ************************************ 00:09:36.951 START TEST nvmf_zcopy 00:09:36.951 ************************************ 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.951 * Looking for test storage... 
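The nvmftestfini teardown traced above (it runs twice here, once from multipath.sh@47 and once from the EXIT trap) reverses that setup. A hedged sketch follows; only the namespace deletion is an assumption, since the trace hides _remove_spdk_ns behind an fd-15 redirect (eval '_remove_spdk_ns 15> /dev/null'), while every other line mirrors a command visible in the log:

sync
set +e
for i in {1..20}; do   # retried up to 20 times while the modules stay busy
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
# Drop exactly the rules tagged SPDK_NVMF during setup, leaving the rest intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1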
00:09:36.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.951 --rc genhtml_branch_coverage=1 00:09:36.951 --rc genhtml_function_coverage=1 00:09:36.951 --rc genhtml_legend=1 00:09:36.951 --rc geninfo_all_blocks=1 00:09:36.951 --rc geninfo_unexecuted_blocks=1 00:09:36.951 00:09:36.951 ' 00:09:36.951 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.951 --rc genhtml_branch_coverage=1 00:09:36.951 --rc genhtml_function_coverage=1 00:09:36.951 --rc genhtml_legend=1 00:09:36.951 --rc geninfo_all_blocks=1 00:09:36.951 --rc geninfo_unexecuted_blocks=1 00:09:36.951 00:09:36.951 ' 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.952 --rc genhtml_branch_coverage=1 00:09:36.952 --rc genhtml_function_coverage=1 00:09:36.952 --rc genhtml_legend=1 00:09:36.952 --rc geninfo_all_blocks=1 00:09:36.952 --rc geninfo_unexecuted_blocks=1 00:09:36.952 00:09:36.952 ' 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.952 --rc genhtml_branch_coverage=1 00:09:36.952 --rc genhtml_function_coverage=1 00:09:36.952 --rc genhtml_legend=1 00:09:36.952 --rc geninfo_all_blocks=1 00:09:36.952 --rc geninfo_unexecuted_blocks=1 00:09:36.952 00:09:36.952 ' 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.952 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.213 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.214 06:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.351 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.351 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.351 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.351 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.351 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:09:45.352 00:09:45.352 --- 10.0.0.2 ping statistics --- 00:09:45.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.352 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:09:45.352 00:09:45.352 --- 10.0.0.1 ping statistics --- 00:09:45.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.352 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.352 06:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=189548 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 189548 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 189548 ']' 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.352 [2024-12-09 06:08:39.106370] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:09:45.352 [2024-12-09 06:08:39.106433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.352 [2024-12-09 06:08:39.185391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.352 [2024-12-09 06:08:39.234416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.352 [2024-12-09 06:08:39.234480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.352 [2024-12-09 06:08:39.234488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.352 [2024-12-09 06:08:39.234500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.352 [2024-12-09 06:08:39.234505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.352 [2024-12-09 06:08:39.235252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.352 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 [2024-12-09 06:08:39.980177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 [2024-12-09 06:08:39.996435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 malloc0 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.613 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.614 { 00:09:45.614 "params": { 00:09:45.614 "name": "Nvme$subsystem", 00:09:45.614 "trtype": "$TEST_TRANSPORT", 00:09:45.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.614 "adrfam": "ipv4", 00:09:45.614 "trsvcid": "$NVMF_PORT", 00:09:45.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.614 "hdgst": ${hdgst:-false}, 00:09:45.614 "ddgst": ${ddgst:-false} 00:09:45.614 }, 00:09:45.614 "method": "bdev_nvme_attach_controller" 00:09:45.614 } 00:09:45.614 EOF 00:09:45.614 )") 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
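[editor's note] Stripped of the xtrace noise, the target provisioning above is a short RPC sequence: create the TCP transport with zero-copy enabled, create a subsystem, attach data and discovery listeners, create a 32 MiB malloc bdev with a 4 KiB block size, and expose it as namespace 1. rpc_cmd is the test suite's wrapper around scripts/rpc.py, so a hand-run equivalent (a sketch, assuming the default /var/tmp/spdk.sock RPC socket; flags copied verbatim from the trace) would be:

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1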
00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:45.614 06:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.614 "params": { 00:09:45.614 "name": "Nvme1", 00:09:45.614 "trtype": "tcp", 00:09:45.614 "traddr": "10.0.0.2", 00:09:45.614 "adrfam": "ipv4", 00:09:45.614 "trsvcid": "4420", 00:09:45.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.614 "hdgst": false, 00:09:45.614 "ddgst": false 00:09:45.614 }, 00:09:45.614 "method": "bdev_nvme_attach_controller" 00:09:45.614 }' 00:09:45.614 [2024-12-09 06:08:40.083208] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:09:45.614 [2024-12-09 06:08:40.083269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189597 ] 00:09:45.614 [2024-12-09 06:08:40.173255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.875 [2024-12-09 06:08:40.224505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.875 Running I/O for 10 seconds... 00:09:48.197 9152.00 IOPS, 71.50 MiB/s [2024-12-09T05:08:43.725Z] 9356.00 IOPS, 73.09 MiB/s [2024-12-09T05:08:44.666Z] 9427.67 IOPS, 73.65 MiB/s [2024-12-09T05:08:45.608Z] 9460.50 IOPS, 73.91 MiB/s [2024-12-09T05:08:46.548Z] 9480.00 IOPS, 74.06 MiB/s [2024-12-09T05:08:47.491Z] 9492.83 IOPS, 74.16 MiB/s [2024-12-09T05:08:48.874Z] 9504.14 IOPS, 74.25 MiB/s [2024-12-09T05:08:49.814Z] 9508.88 IOPS, 74.29 MiB/s [2024-12-09T05:08:50.757Z] 9512.89 IOPS, 74.32 MiB/s [2024-12-09T05:08:50.757Z] 9514.30 IOPS, 74.33 MiB/s 00:09:56.170 Latency(us) 00:09:56.170 [2024-12-09T05:08:50.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.170 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:56.170 Verification LBA range: start 0x0 length 0x1000 00:09:56.170 Nvme1n1 : 10.01 9514.46 74.33 0.00 0.00 13406.53 1197.29 25710.28 00:09:56.170 [2024-12-09T05:08:50.757Z] =================================================================================================================== 00:09:56.170 [2024-12-09T05:08:50.757Z] Total : 9514.46 74.33 0.00 0.00 13406.53 1197.29 25710.28 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=191407 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.170 [2024-12-09 06:08:50.576668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.576705] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.170 { 00:09:56.170 "params": { 00:09:56.170 "name": "Nvme$subsystem", 00:09:56.170 "trtype": "$TEST_TRANSPORT", 00:09:56.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.170 "adrfam": "ipv4", 00:09:56.170 "trsvcid": "$NVMF_PORT", 00:09:56.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.170 "hdgst": ${hdgst:-false}, 00:09:56.170 "ddgst": ${ddgst:-false} 00:09:56.170 }, 00:09:56.170 "method": "bdev_nvme_attach_controller" 00:09:56.170 } 00:09:56.170 EOF 00:09:56.170 )") 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:56.170 [2024-12-09 06:08:50.584655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:56.170 [2024-12-09 06:08:50.584664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:56.170 06:08:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:56.170 "params": { 00:09:56.170 "name": "Nvme1", 00:09:56.170 "trtype": "tcp", 00:09:56.170 "traddr": "10.0.0.2", 00:09:56.170 "adrfam": "ipv4", 00:09:56.170 "trsvcid": "4420", 00:09:56.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.170 "hdgst": false, 00:09:56.170 "ddgst": false 00:09:56.170 }, 00:09:56.170 "method": "bdev_nvme_attach_controller" 00:09:56.170 }' 00:09:56.170 [2024-12-09 06:08:50.592673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.592681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.600694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.600702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.602875] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
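[editor's note] The config=() / heredoc / jq dance above is gen_nvmf_target_json expanding its parameter template into the JSON that printf emits, which bdevperf consumes over /dev/fd/62 (and /dev/fd/63 for this second run). Written out by hand, a minimal config for the same attach would look like the following sketch (params copied from the printed JSON; the helper may wrap additional bdev options around this):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

Saved as, say, bdevperf.json (a hypothetical filename), the first run above is then simply: build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192.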
00:09:56.170 [2024-12-09 06:08:50.602919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191407 ] 00:09:56.170 [2024-12-09 06:08:50.608715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.608722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.616735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.616742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.624755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.624762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.632776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.632783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.640795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.640802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.648816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.648823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.656837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.170 [2024-12-09 06:08:50.656844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.170 [2024-12-09 06:08:50.664857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.664863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.672878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.672888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.680898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.680905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.683888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.171 [2024-12-09 06:08:50.688920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.688928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.696942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.696951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.704961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.704971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:56.171 [2024-12-09 06:08:50.712981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.712990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.713690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.171 [2024-12-09 06:08:50.721001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.721009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.729028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.729041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.737046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.737058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.745065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.745074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.171 [2024-12-09 06:08:50.753084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.171 [2024-12-09 06:08:50.753092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.761105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.761114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.769123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.769130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.777145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.777151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.785176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.785190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.793194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.793204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.801211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.801220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.809233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.809241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.817256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.817270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 
06:08:50.825276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.825283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.833298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.833305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.841320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.841329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.849340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.849347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.857362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.857371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.865385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.865394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.873407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.873416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.881428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.881435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.889460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.889474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.897475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.897482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 Running I/O for 5 seconds... 
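[editor's note] Everything that follows is deliberate: while this 5-second randrw bdevperf run (pid 191407) is in flight, the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached, so each attempt logs the subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace" pair. The point is to exercise the RPC path concurrently with zero-copy I/O, not to actually add a namespace. A sketch of such a retry loop (an illustration, not the literal target/zcopy.sh code):

# Hammer the RPC path while bdevperf (pid in $perfpid) is still running;
# every call is expected to fail because NSID 1 is already in use.
while kill -0 "$perfpid" 2> /dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done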
00:09:56.432 [2024-12-09 06:08:50.907470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.432 [2024-12-09 06:08:50.907485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.432 [2024-12-09 06:08:50.916256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.433 [2024-12-09 06:08:50.916273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair repeats for every retry, timestamps advancing from 06:08:50.924 through 06:08:52.438, with one mid-run throughput sample:)
00:09:57.478 18528.00 IOPS, 144.75 MiB/s [2024-12-09T05:08:52.065Z]
00:09:58.001 [2024-12-09 06:08:52.438528]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.438543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.447235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.447250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.456000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.456014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.464844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.464859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.473595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.473610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.482307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.482322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.490974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.490989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.499934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.499948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.508922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.508936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.517605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.517619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.526360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.526375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.535367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.535381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.544075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.544089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.553056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.553070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.561672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.561687] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.570278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.570292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.001 [2024-12-09 06:08:52.579022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.001 [2024-12-09 06:08:52.579036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.588191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.588206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.596633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.596647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.605665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.605679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.614516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.614530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.623135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.623149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.632379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.632393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.641095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.641110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.650130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.650144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.658844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.658858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.667637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.667651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.676493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.676508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.685655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.685670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.694413] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.694427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.702861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.702876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.711183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.711198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.719853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.719867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.728577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.728592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.737469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.737484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.746707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.746722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.755361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.755375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.764356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.764371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.773563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.773578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.781961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.781975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.791061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.791076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.800395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.800413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.809104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.809118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.817743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.817758] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.826754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.826768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.835521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.835536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.262 [2024-12-09 06:08:52.844168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.262 [2024-12-09 06:08:52.844182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.852818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.852833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.861804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.861819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.870670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.870684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.879270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.879285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.887675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.887690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.896662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.896676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 18671.00 IOPS, 145.87 MiB/s [2024-12-09T05:08:53.110Z] [2024-12-09 06:08:52.905876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.905890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.914483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.914497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.923353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.923368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.931773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.931788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.939790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.939804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 
06:08:52.948752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.948766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.958144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.958159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.966870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.966888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.975810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.975825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.984436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.984456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:52.992994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:52.993009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.001427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.001442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.010475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.010489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.019219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.019234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.028354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.028369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.036951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.036965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.045927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.045941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.054515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.054529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.063019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.063034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.072095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.072109] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.080197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.080211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.089436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.089456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.098108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.098122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.523 [2024-12-09 06:08:53.106624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.523 [2024-12-09 06:08:53.106639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.115774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.115789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.124317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.124332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.133368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.133386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.142229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.142243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.150903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.150918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.159879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.159894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.169359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.169373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.177352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.177366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.186179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.186193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.194692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.783 [2024-12-09 06:08:53.194706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.783 [2024-12-09 06:08:53.203322] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.203337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.212300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.212315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.220743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.220758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.229145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.229160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.238160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.238175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.247175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.247190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.256125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.256140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.264188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.264203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.273226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.273240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.281632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.281647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.290377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.290392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.299418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.299436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.307881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.307896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.316838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.316852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.325603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.325618] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.334504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.334519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.343384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.343398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.351945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.351959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.784 [2024-12-09 06:08:53.360925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.784 [2024-12-09 06:08:53.360940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.369695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.369710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.378637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.378652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.387391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.387406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.396590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.396605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.404680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.404695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.413580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.413595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.422236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.422251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.431304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.431319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.439857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.439872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.448621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.448636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.457525] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.457548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.465975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.465990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.474598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.474613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.483180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.483195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.491982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.491997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.501070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.501085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.510041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.510055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.518660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.518674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.527593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.527607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.536223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.536237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.545353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.545367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.554149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.554163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.562799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.562813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.571576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.571590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.580179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.580193] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.588575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.588589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.597497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.597511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.605924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.605938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.615364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.615379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.044 [2024-12-09 06:08:53.623482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.044 [2024-12-09 06:08:53.623496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.303 [2024-12-09 06:08:53.632361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.303 [2024-12-09 06:08:53.632375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.303 [2024-12-09 06:08:53.640469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.303 [2024-12-09 06:08:53.640483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.303 [2024-12-09 06:08:53.649411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.303 [2024-12-09 06:08:53.649425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.303 [2024-12-09 06:08:53.658055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.303 [2024-12-09 06:08:53.658069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.666897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.666911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.675656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.675670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.684590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.684604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.693286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.693300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.702474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.702488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.711000] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.711015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.720182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.720197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.728752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.728766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.738259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.738273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.746937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.746952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.755455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.755469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.764692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.764707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.773347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.773361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.782708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.782722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.790792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.790806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.799818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.799832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.808652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.808667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.817217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.817232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.826220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.826234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.835334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.835406] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.844583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.844597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.853147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.853162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.861859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.861873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.870325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.870339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.879049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.879063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.304 [2024-12-09 06:08:53.887241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.304 [2024-12-09 06:08:53.887255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.895748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.895763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.904757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.904772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 18679.67 IOPS, 145.93 MiB/s [2024-12-09T05:08:54.150Z] [2024-12-09 06:08:53.912956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.912971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.921845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.921859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.930868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.930883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.939676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.939690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.948881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.948896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.957507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.957525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 
06:08:53.966548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.966562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.975282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.975296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.984759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.984774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:53.992740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:53.992754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.001832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.001846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.010785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.010799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.019233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.019247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.028347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.028361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.036899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.036913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.044880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.044895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.053662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.053676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.062162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.062176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.071185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.071200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.079031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.079045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.563 [2024-12-09 06:08:54.087933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.563 [2024-12-09 06:08:54.087947] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:59.563 [2024-12-09 06:08:54.097053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:59.563 [2024-12-09 06:08:54.097067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:2130 "Requested NSID 1 already in use", then nvmf_rpc.c:1520 "Unable to add namespace") repeats roughly every 9 ms from 06:08:54.106 onward; the elapsed-time prefix advances from 00:09:59.563 to 00:10:00.343 ...]
00:10:00.343 18698.75 IOPS, 146.08 MiB/s [2024-12-09T05:08:54.930Z]
[... the pair keeps repeating at the same rate, the elapsed-time prefix advancing to 00:10:01.387, with the last pre-summary repetition at 2024-12-09 06:08:55.909818 ...]
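For context on the flood above: each pair records one failed attempt to add a namespace whose NSID is still attached; spdk_nvmf_subsystem_add_ns_ext rejects the request ("Requested NSID 1 already in use") and the RPC handler then logs "Unable to add namespace". A minimal loop that would reproduce the same pattern against a live target is sketched below; this is an illustration, not the zcopy.sh source, with the subsystem NQN and malloc0 bdev name taken from the commands later in this log and an arbitrary iteration count.

    #!/usr/bin/env bash
    # Sketch: while NSID 1 is still present on the subsystem, every iteration
    # should fail with the same two errors seen in the log above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 200); do
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done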
00:10:01.387 18692.60 IOPS, 146.04 MiB/s
00:10:01.387 Latency(us)
00:10:01.387 [2024-12-09T05:08:55.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.387 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:01.387 Nvme1n1 : 5.01 18698.84 146.08 0.00 0.00 6840.24 2848.30 13308.85
00:10:01.387 [2024-12-09T05:08:55.974Z] ===================================================================================================================
00:10:01.387 [2024-12-09T05:08:55.974Z] Total : 18698.84 146.08 0.00 0.00 6840.24 2848.30 13308.85
[... after the summary the same error pair recurs a few more times as the run winds down, from 2024-12-09 06:08:55.918637 through the final failed attempt at 06:08:56.012289 ...]
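A quick consistency check on the summary above (an added aside, not log output): by Little's law, sustained IOPS ≈ queue depth / mean latency, and 128 / 6840.24 µs ≈ 18,713 requests/s, within about 0.1% of the reported 18,698.84 IOPS; likewise 18,698.84 I/Os of 8192 bytes each works out to exactly the 146.08 MiB/s shown in the throughput column. The run was steady and latency-bound, with no failures or timeouts.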
00:10:01.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (191407) - No such process
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 191407
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.647 delay0
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.647 06:08:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:01.647 [2024-12-09 06:08:56.152527] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:08.237 Initializing NVMe Controllers
00:10:08.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:08.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:08.237 Initialization complete. Launching workers.
00:10:08.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 425
00:10:08.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 712, failed to submit 33
00:10:08.237 success 518, unsuccessful 194, failed 0
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:08.237 rmmod nvme_tcp
00:10:08.237 rmmod nvme_fabrics
00:10:08.237 rmmod nvme_keyring
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 189548 ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 189548 ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 189548'
00:10:08.237 killing process with pid 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 189548
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:08.237 06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
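For readers reconstructing the sequence above: rpc_cmd in these traces is the autotest wrapper around SPDK's scripts/rpc.py, so the namespace swap and the abort run can be replayed standalone roughly as follows. This is a sketch under that assumption, with paths taken from the workspace layout in this log and a target already listening on 10.0.0.2:4420.

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Drop NSID 1, wrap malloc0 in a delay bdev (all four latencies 1000000 us),
    # and re-attach the delayed bdev as NSID 1, as zcopy.sh does above.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive abort requests against the now-slow namespace for 5 seconds.
    "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort counters in the log are self-consistent: 712 aborts submitted plus 33 that failed to submit equals 745, matching the 320 completed plus 425 failed I/Os, and 518 successful plus 194 unsuccessful aborts accounts for all 712 submitted.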
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:09:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:10.146
00:10:10.146 real 0m33.305s
00:10:10.146 user 0m44.669s
00:10:10.146 sys 0m9.888s
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:10.146 ************************************
00:10:10.146 END TEST nvmf_zcopy
00:10:10.146 ************************************
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:10.146 ************************************
00:10:10.146 START TEST nvmf_nmic
00:10:10.146 ************************************
00:10:10.146 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:10.406 * Looking for test storage...
00:10:10.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.406 --rc genhtml_branch_coverage=1 00:10:10.406 --rc genhtml_function_coverage=1 00:10:10.406 --rc genhtml_legend=1 00:10:10.406 --rc geninfo_all_blocks=1 00:10:10.406 --rc geninfo_unexecuted_blocks=1 00:10:10.406 00:10:10.406 ' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.406 --rc genhtml_branch_coverage=1 00:10:10.406 --rc genhtml_function_coverage=1 00:10:10.406 --rc genhtml_legend=1 00:10:10.406 --rc geninfo_all_blocks=1 00:10:10.406 --rc geninfo_unexecuted_blocks=1 00:10:10.406 00:10:10.406 ' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.406 --rc genhtml_branch_coverage=1 00:10:10.406 --rc genhtml_function_coverage=1 00:10:10.406 --rc genhtml_legend=1 00:10:10.406 --rc geninfo_all_blocks=1 00:10:10.406 --rc geninfo_unexecuted_blocks=1 00:10:10.406 00:10:10.406 ' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.406 --rc genhtml_branch_coverage=1 00:10:10.406 --rc genhtml_function_coverage=1 00:10:10.406 --rc genhtml_legend=1 00:10:10.406 --rc geninfo_all_blocks=1 00:10:10.406 --rc geninfo_unexecuted_blocks=1 00:10:10.406 00:10:10.406 ' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
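[editor's aside] The lt/cmp_versions calls traced above check whether the installed lcov predates 2.0, so that the matching --rc option spelling gets exported below. The logic is a plain component-wise numeric compare; a minimal standalone sketch of the same idea follows (version_lt is an illustrative name, not the script's own helper):

    # version_lt A B -> exit 0 when A < B; re-implementation of the
    # lt/cmp_versions walk traced above, for illustration only
    version_lt() {
        local -a ver1 ver2
        local IFS='.-:' v
        read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A > B
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A < B
        done
        return 1   # equal, hence not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use pre-2.0 --rc option names"
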
00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:10.406 
06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.406 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.407 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.407 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.407 06:09:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:18.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:18.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.546 06:09:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:18.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:18.546 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.546 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.547 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.547 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.547 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.547 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.547 06:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:10:18.547 00:10:18.547 --- 10.0.0.2 ping statistics --- 00:10:18.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.547 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:10:18.547 00:10:18.547 --- 10.0.0.1 ping statistics --- 00:10:18.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.547 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=198025 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 198025 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 198025 ']' 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.547 06:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 [2024-12-09 06:09:12.338196] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
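[editor's aside] For orientation: the plumbing traced above gives the test two endpoints on one host. The target-side port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk at 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1, and the nvmf_tgt launched just above runs inside that namespace. Condensed from the trace, with the same device names and addresses:

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # root ns -> namespaced target
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
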
00:10:18.547 [2024-12-09 06:09:12.338259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.547 [2024-12-09 06:09:12.435280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.547 [2024-12-09 06:09:12.487779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.547 [2024-12-09 06:09:12.487833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.547 [2024-12-09 06:09:12.487842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.547 [2024-12-09 06:09:12.487849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.547 [2024-12-09 06:09:12.487855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.547 [2024-12-09 06:09:12.489933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.547 [2024-12-09 06:09:12.490097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.547 [2024-12-09 06:09:12.490252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.547 [2024-12-09 06:09:12.490252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 [2024-12-09 06:09:13.205738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 Malloc0 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 [2024-12-09 06:09:13.281358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:18.809 test case1: single bdev can't be used in multiple subsystems 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 [2024-12-09 06:09:13.317303] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:18.809 [2024-12-09 06:09:13.317323] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:18.809 [2024-12-09 06:09:13.317331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.809 request: 00:10:18.809 { 00:10:18.809 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:18.809 "namespace": { 00:10:18.809 "bdev_name": "Malloc0", 00:10:18.809 "no_auto_visible": false, 
00:10:18.809 "hide_metadata": false 00:10:18.809 }, 00:10:18.809 "method": "nvmf_subsystem_add_ns", 00:10:18.809 "req_id": 1 00:10:18.809 } 00:10:18.809 Got JSON-RPC error response 00:10:18.809 response: 00:10:18.809 { 00:10:18.809 "code": -32602, 00:10:18.809 "message": "Invalid parameters" 00:10:18.809 } 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:18.809 Adding namespace failed - expected result. 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:18.809 test case2: host connect to nvmf target in multiple paths 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 [2024-12-09 06:09:13.329457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 06:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.724 06:09:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:22.118 06:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.118 06:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.118 06:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.118 06:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.118 06:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.030 06:09:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:24.030 06:09:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.030 [global] 00:10:24.030 thread=1 00:10:24.030 invalidate=1 00:10:24.030 rw=write 00:10:24.030 time_based=1 00:10:24.030 runtime=1 00:10:24.030 ioengine=libaio 00:10:24.030 direct=1 00:10:24.030 bs=4096 00:10:24.030 iodepth=1 00:10:24.030 norandommap=0 00:10:24.030 numjobs=1 00:10:24.030 00:10:24.030 verify_dump=1 00:10:24.030 verify_backlog=512 00:10:24.030 verify_state_save=0 00:10:24.030 do_verify=1 00:10:24.030 verify=crc32c-intel 00:10:24.030 [job0] 00:10:24.030 filename=/dev/nvme0n1 00:10:24.030 Could not set queue depth (nvme0n1) 00:10:24.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.606 fio-3.35 00:10:24.606 Starting 1 thread 00:10:25.995 00:10:25.996 job0: (groupid=0, jobs=1): err= 0: pid=199431: Mon Dec 9 06:09:20 2024 00:10:25.996 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:25.996 slat (nsec): min=7172, max=52167, avg=25608.71, stdev=2702.22 00:10:25.996 clat (usec): min=441, max=1176, avg=947.18, stdev=85.11 00:10:25.996 lat (usec): min=449, max=1202, avg=972.79, stdev=85.50 00:10:25.996 clat percentiles (usec): 00:10:25.996 | 1.00th=[ 644], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 906], 00:10:25.996 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:10:25.996 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:10:25.996 | 99.00th=[ 1074], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:10:25.996 | 99.99th=[ 1172] 00:10:25.996 write: IOPS=858, BW=3433KiB/s (3515kB/s)(3436KiB/1001msec); 0 zone resets 00:10:25.996 slat (nsec): min=9225, max=59484, avg=28258.90, stdev=9755.68 00:10:25.996 clat (usec): min=170, max=2076, avg=545.14, stdev=104.78 00:10:25.996 lat (usec): min=182, max=2109, avg=573.40, stdev=109.12 00:10:25.996 clat percentiles (usec): 00:10:25.996 | 1.00th=[ 322], 5.00th=[ 383], 10.00th=[ 416], 20.00th=[ 465], 00:10:25.996 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 562], 00:10:25.996 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 676], 00:10:25.996 | 99.00th=[ 717], 99.50th=[ 725], 99.90th=[ 2073], 99.95th=[ 2073], 00:10:25.996 | 99.99th=[ 2073] 00:10:25.996 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.996 lat (usec) : 250=0.07%, 500=17.94%, 750=46.02%, 1000=27.13% 00:10:25.996 lat (msec) : 2=8.75%, 4=0.07% 00:10:25.996 cpu : usr=3.20%, sys=4.60%, ctx=1371, majf=0, minf=1 00:10:25.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.996 issued rwts: total=512,859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.996 00:10:25.996 Run status group 0 (all jobs): 00:10:25.996 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:25.996 WRITE: bw=3433KiB/s (3515kB/s), 3433KiB/s-3433KiB/s (3515kB/s-3515kB/s), io=3436KiB (3518kB), run=1001-1001msec 00:10:25.996 00:10:25.996 Disk stats (read/write): 00:10:25.996 nvme0n1: ios=562/685, merge=0/0, ticks=526/297, in_queue=823, util=93.79% 
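[editor's aside] The READ line in the results above is fio's crc32c verification read-back of the written blocks (do_verify=1 with verify_backlog=512), not a separate read workload. As a rough standalone equivalent of the wrapper-generated job file shown above, a sketch only; the real fio-wrapper adds SPDK/nvmf-specific plumbing, but all flags here are standard fio options:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512
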
00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.996 rmmod nvme_tcp 00:10:25.996 rmmod nvme_fabrics 00:10:25.996 rmmod nvme_keyring 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 198025 ']' 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 198025 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 198025 ']' 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 198025 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 198025 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 198025' 00:10:25.996 killing process with pid 198025 00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 198025 
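[editor's aside] The disconnect above detaches both connected paths (hence "disconnected 2 controller(s)"), after which waitforserial_disconnect polls lsblk until no block device with the test serial remains visible. A minimal sketch of that wait, using the same commands as traced; the retry bound and variable name are illustrative:

    serial=SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    for _ in {1..15}; do
        # done once no namespace still reports the test serial
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
        sleep 1
    done
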
00:10:25.996 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 198025 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.258 06:09:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.171 00:10:28.171 real 0m18.008s 00:10:28.171 user 0m43.935s 00:10:28.171 sys 0m6.639s 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.171 ************************************ 00:10:28.171 END TEST nvmf_nmic 00:10:28.171 ************************************ 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.171 06:09:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.433 ************************************ 00:10:28.433 START TEST nvmf_fio_target 00:10:28.433 ************************************ 00:10:28.433 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.433 * Looking for test storage... 
00:10:28.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.433 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.433 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.434 --rc genhtml_branch_coverage=1 00:10:28.434 --rc genhtml_function_coverage=1 00:10:28.434 --rc genhtml_legend=1 00:10:28.434 --rc geninfo_all_blocks=1 00:10:28.434 --rc geninfo_unexecuted_blocks=1 00:10:28.434 00:10:28.434 ' 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.434 06:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.434 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.435 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.435 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.435 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.435 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.435 06:09:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.696 06:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.862 06:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.862 06:09:30 
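Note that nvmftestinit registered `trap nvmftestfini SIGINT SIGTERM EXIT` (traced above) before touching any network state, so namespaces and firewall rules get torn down even if the run is interrupted. A generic sketch of that pattern, with cleanup_fn as a hypothetical stand-in for nvmftestfini:

    # Register cleanup before creating anything that needs undoing;
    # EXIT covers normal completion, SIGINT/SIGTERM cover interruption.
    cleanup_fn() {
        ip netns del my_test_ns 2>/dev/null || true
    }
    trap cleanup_fn SIGINT SIGTERM EXIT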
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:36.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:36.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.862 06:09:30 
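The `Found 0000:4b:00.0/0000:4b:00.1 (0x8086 - 0x159b)` hits above are the two ports of an Intel E810 NIC matched against the e810/x722/mlx vendor:device lists the script just built. Roughly the same discovery can be done directly with lspci and sysfs; a sketch under those assumptions, not the implementation of gather_supported_nvmf_pci_devs itself:

    # Match network-class (0200) devices against the supported
    # vendor:device pairs, then list their kernel netdev names
    # (the cvl_0_0/cvl_0_1 style names used later in this log).
    lspci -Dnmm | while read -r addr class vendor device _; do
        [[ $class == '"0200"' ]] || continue
        case "${vendor//\"/}:${device//\"/}" in
            8086:1592|8086:159b|8086:37d2|15b3:*)
                echo "candidate NVMe-oF NIC at $addr"
                ls "/sys/bus/pci/devices/$addr/net/" 2>/dev/null
                ;;
        esac
    done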
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.862 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:36.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:36.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.863 06:09:30 
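With cvl_0_0 chosen as the target interface and cvl_0_1 as the initiator, nvmf_tcp_init (traced next) turns one dual-port NIC into a self-contained target/initiator pair by isolating the target port in its own network namespace. The commands that follow in the log reduce to this sequence:

    # Target side lives in a namespace; initiator side stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic to the target port through the host firewall.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1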
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:10:36.863 00:10:36.863 --- 10.0.0.2 ping statistics --- 00:10:36.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.863 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:10:36.863 00:10:36.863 --- 10.0.0.1 ping statistics --- 00:10:36.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.863 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=203655 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 203655 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 203655 ']' 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.863 06:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.863 [2024-12-09 06:09:30.433351] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
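The nvmfappstart call above boils down to launching nvmf_tgt inside the target namespace and then blocking in waitforlisten until the app answers on its RPC socket, so that no configuration RPC races the startup. A hedged sketch of that pattern; paths are placeholders and the real waitforlisten in autotest_common.sh differs in detail:

    # Launch the target in the namespace prepared earlier, then poll
    # the SPDK RPC socket until it responds or we give up.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done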
00:10:36.863 [2024-12-09 06:09:30.433416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.863 [2024-12-09 06:09:30.532937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.863 [2024-12-09 06:09:30.585414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.863 [2024-12-09 06:09:30.585483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.863 [2024-12-09 06:09:30.585493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.863 [2024-12-09 06:09:30.585519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.863 [2024-12-09 06:09:30.585525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.863 [2024-12-09 06:09:30.587408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.863 [2024-12-09 06:09:30.587465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.863 [2024-12-09 06:09:30.587615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.863 [2024-12-09 06:09:30.587730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.863 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.125 [2024-12-09 06:09:31.472895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.125 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.387 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:37.387 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.387 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:37.387 06:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.647 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:37.647 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.908 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:37.908 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.169 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.169 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:38.169 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.428 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:38.428 06:09:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.688 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:38.688 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:38.688 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.948 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:38.948 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.209 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.209 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.470 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.470 [2024-12-09 06:09:33.964462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.470 06:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:39.731 06:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:39.992 06:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.376 06:09:35 
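Unrolled, the fio.sh setup traced above is a plain rpc.py sequence: one TCP transport, seven malloc bdevs (two exported directly, two consumed by a raid0, three by a concat), then a subsystem with four namespaces and a listener the initiator can reach across the namespace boundary. Condensed, with the add_ns/add_listener order lightly rearranged and the --hostnqn/--hostid flags omitted from the connect for brevity:

    rpc_py="scripts/rpc.py"   # runs against the target's RPC socket
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    # bdev_malloc_create 64 512 is issued seven times -> Malloc0..Malloc6
    $rpc_py bdev_malloc_create 64 512
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: the four namespaces appear as nvme0n1..nvme0n4,
    # which is what waitforserial counts via lsblk below.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420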
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:41.376 06:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:41.376 06:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.376 06:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:41.376 06:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:41.376 06:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:43.287 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:43.287 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:43.287 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.546 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:43.546 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.547 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:43.547 06:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.547 [global] 00:10:43.547 thread=1 00:10:43.547 invalidate=1 00:10:43.547 rw=write 00:10:43.547 time_based=1 00:10:43.547 runtime=1 00:10:43.547 ioengine=libaio 00:10:43.547 direct=1 00:10:43.547 bs=4096 00:10:43.547 iodepth=1 00:10:43.547 norandommap=0 00:10:43.547 numjobs=1 00:10:43.547 00:10:43.547 verify_dump=1 00:10:43.547 verify_backlog=512 00:10:43.547 verify_state_save=0 00:10:43.547 do_verify=1 00:10:43.547 verify=crc32c-intel 00:10:43.547 [job0] 00:10:43.547 filename=/dev/nvme0n1 00:10:43.547 [job1] 00:10:43.547 filename=/dev/nvme0n2 00:10:43.547 [job2] 00:10:43.547 filename=/dev/nvme0n3 00:10:43.547 [job3] 00:10:43.547 filename=/dev/nvme0n4 00:10:43.547 Could not set queue depth (nvme0n1) 00:10:43.547 Could not set queue depth (nvme0n2) 00:10:43.547 Could not set queue depth (nvme0n3) 00:10:43.547 Could not set queue depth (nvme0n4) 00:10:43.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.806 fio-3.35 00:10:43.806 Starting 4 threads 00:10:45.191 00:10:45.191 job0: (groupid=0, jobs=1): err= 0: pid=205121: Mon Dec 9 06:09:39 2024 00:10:45.191 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1008msec) 00:10:45.191 slat (nsec): min=10292, max=27151, avg=25863.28, stdev=3890.06 00:10:45.191 clat (usec): min=915, max=42969, avg=39632.19, stdev=9687.32 00:10:45.191 lat (usec): min=942, max=42996, avg=39658.05, stdev=9687.14 00:10:45.191 clat percentiles (usec): 00:10:45.191 | 1.00th=[ 914], 5.00th=[ 914], 10.00th=[41157], 
20.00th=[41157], 00:10:45.191 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:45.191 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:10:45.191 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:45.191 | 99.99th=[42730] 00:10:45.191 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:45.191 slat (nsec): min=9123, max=59144, avg=30942.94, stdev=10786.12 00:10:45.191 clat (usec): min=124, max=1149, avg=534.28, stdev=134.86 00:10:45.191 lat (usec): min=136, max=1184, avg=565.22, stdev=139.45 00:10:45.191 clat percentiles (usec): 00:10:45.191 | 1.00th=[ 253], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 416], 00:10:45.191 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 570], 00:10:45.191 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 742], 00:10:45.191 | 99.00th=[ 857], 99.50th=[ 979], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:45.191 | 99.99th=[ 1156] 00:10:45.191 bw ( KiB/s): min= 4096, max= 4096, per=32.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.191 lat (usec) : 250=0.94%, 500=37.36%, 750=54.34%, 1000=3.77% 00:10:45.191 lat (msec) : 2=0.38%, 50=3.21% 00:10:45.191 cpu : usr=1.19%, sys=1.79%, ctx=532, majf=0, minf=1 00:10:45.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.191 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.191 job1: (groupid=0, jobs=1): err= 0: pid=205122: Mon Dec 9 06:09:39 2024 00:10:45.191 read: IOPS=600, BW=2402KiB/s (2459kB/s)(2404KiB/1001msec) 00:10:45.191 slat (nsec): min=6782, max=59196, avg=23522.74, stdev=7129.42 00:10:45.191 clat (usec): min=281, max=1140, avg=769.91, stdev=93.49 00:10:45.191 lat (usec): min=307, max=1166, avg=793.43, stdev=95.19 00:10:45.191 clat percentiles (usec): 00:10:45.191 | 1.00th=[ 537], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 709], 00:10:45.191 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:10:45.191 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 873], 95.00th=[ 938], 00:10:45.191 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1139], 99.95th=[ 1139], 00:10:45.191 | 99.99th=[ 1139] 00:10:45.191 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:45.191 slat (nsec): min=9381, max=53360, avg=27837.10, stdev=9628.73 00:10:45.191 clat (usec): min=201, max=806, avg=472.29, stdev=109.79 00:10:45.191 lat (usec): min=234, max=826, avg=500.13, stdev=113.15 00:10:45.191 clat percentiles (usec): 00:10:45.191 | 1.00th=[ 247], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 371], 00:10:45.191 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 482], 00:10:45.191 | 70.00th=[ 506], 80.00th=[ 562], 90.00th=[ 644], 95.00th=[ 685], 00:10:45.191 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 791], 99.95th=[ 807], 00:10:45.191 | 99.99th=[ 807] 00:10:45.192 bw ( KiB/s): min= 4096, max= 4096, per=32.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.192 lat (usec) : 250=0.68%, 500=41.54%, 750=33.42%, 1000=23.82% 00:10:45.192 lat (msec) : 2=0.55% 00:10:45.192 cpu : usr=2.30%, sys=4.60%, ctx=1626, majf=0, minf=1 00:10:45.192 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 issued rwts: total=601,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.192 job2: (groupid=0, jobs=1): err= 0: pid=205133: Mon Dec 9 06:09:39 2024 00:10:45.192 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:45.192 slat (nsec): min=6491, max=43937, avg=25985.58, stdev=2481.46 00:10:45.192 clat (usec): min=701, max=1984, avg=948.71, stdev=94.85 00:10:45.192 lat (usec): min=727, max=2010, avg=974.69, stdev=94.99 00:10:45.192 clat percentiles (usec): 00:10:45.192 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 898], 00:10:45.192 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:10:45.192 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:10:45.192 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1991], 99.95th=[ 1991], 00:10:45.192 | 99.99th=[ 1991] 00:10:45.192 write: IOPS=819, BW=3277KiB/s (3355kB/s)(3280KiB/1001msec); 0 zone resets 00:10:45.192 slat (nsec): min=9154, max=65179, avg=29586.36, stdev=9576.88 00:10:45.192 clat (usec): min=165, max=873, avg=569.98, stdev=109.26 00:10:45.192 lat (usec): min=178, max=922, avg=599.56, stdev=112.96 00:10:45.192 clat percentiles (usec): 00:10:45.192 | 1.00th=[ 306], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 482], 00:10:45.192 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 603], 00:10:45.192 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 742], 00:10:45.192 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:10:45.192 | 99.99th=[ 873] 00:10:45.192 bw ( KiB/s): min= 4087, max= 4087, per=32.48%, avg=4087.00, stdev= 0.00, samples=1 00:10:45.192 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:45.192 lat (usec) : 250=0.15%, 500=15.24%, 750=44.07%, 1000=33.18% 00:10:45.192 lat (msec) : 2=7.36% 00:10:45.192 cpu : usr=3.10%, sys=4.60%, ctx=1332, majf=0, minf=2 00:10:45.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 issued rwts: total=512,820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.192 job3: (groupid=0, jobs=1): err= 0: pid=205139: Mon Dec 9 06:09:39 2024 00:10:45.192 read: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec) 00:10:45.192 slat (nsec): min=7440, max=45604, avg=26594.12, stdev=2289.51 00:10:45.192 clat (usec): min=566, max=1150, avg=955.59, stdev=65.18 00:10:45.192 lat (usec): min=592, max=1176, avg=982.18, stdev=65.44 00:10:45.192 clat percentiles (usec): 00:10:45.192 | 1.00th=[ 725], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 914], 00:10:45.192 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:10:45.192 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1037], 00:10:45.192 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:45.192 | 99.99th=[ 1156] 00:10:45.192 write: IOPS=813, BW=3253KiB/s (3332kB/s)(3260KiB/1002msec); 0 zone resets 00:10:45.192 slat (nsec): min=9277, max=61133, avg=31297.35, stdev=9677.27 00:10:45.192 clat (usec): min=134, max=885, avg=566.78, 
stdev=115.69 00:10:45.192 lat (usec): min=167, max=922, avg=598.08, stdev=118.91 00:10:45.192 clat percentiles (usec): 00:10:45.192 | 1.00th=[ 310], 5.00th=[ 375], 10.00th=[ 416], 20.00th=[ 465], 00:10:45.192 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:10:45.192 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 758], 00:10:45.192 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 889], 99.95th=[ 889], 00:10:45.192 | 99.99th=[ 889] 00:10:45.192 bw ( KiB/s): min= 4096, max= 4096, per=32.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.192 lat (usec) : 250=0.23%, 500=17.11%, 750=41.30%, 1000=33.61% 00:10:45.192 lat (msec) : 2=7.76% 00:10:45.192 cpu : usr=2.60%, sys=5.29%, ctx=1328, majf=0, minf=1 00:10:45.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.192 issued rwts: total=512,815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.192 00:10:45.192 Run status group 0 (all jobs): 00:10:45.192 READ: bw=6520KiB/s (6676kB/s), 71.4KiB/s-2402KiB/s (73.1kB/s-2459kB/s), io=6572KiB (6730kB), run=1001-1008msec 00:10:45.192 WRITE: bw=12.3MiB/s (12.9MB/s), 2032KiB/s-4092KiB/s (2081kB/s-4190kB/s), io=12.4MiB (13.0MB), run=1001-1008msec 00:10:45.192 00:10:45.192 Disk stats (read/write): 00:10:45.192 nvme0n1: ios=64/512, merge=0/0, ticks=896/218, in_queue=1114, util=86.77% 00:10:45.192 nvme0n2: ios=562/795, merge=0/0, ticks=464/350, in_queue=814, util=84.96% 00:10:45.192 nvme0n3: ios=526/512, merge=0/0, ticks=499/237, in_queue=736, util=90.21% 00:10:45.192 nvme0n4: ios=527/512, merge=0/0, ticks=526/252, in_queue=778, util=92.15% 00:10:45.192 06:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:45.192 [global] 00:10:45.192 thread=1 00:10:45.192 invalidate=1 00:10:45.192 rw=randwrite 00:10:45.192 time_based=1 00:10:45.192 runtime=1 00:10:45.192 ioengine=libaio 00:10:45.192 direct=1 00:10:45.192 bs=4096 00:10:45.192 iodepth=1 00:10:45.192 norandommap=0 00:10:45.192 numjobs=1 00:10:45.192 00:10:45.192 verify_dump=1 00:10:45.192 verify_backlog=512 00:10:45.192 verify_state_save=0 00:10:45.192 do_verify=1 00:10:45.192 verify=crc32c-intel 00:10:45.192 [job0] 00:10:45.192 filename=/dev/nvme0n1 00:10:45.192 [job1] 00:10:45.192 filename=/dev/nvme0n2 00:10:45.192 [job2] 00:10:45.192 filename=/dev/nvme0n3 00:10:45.192 [job3] 00:10:45.192 filename=/dev/nvme0n4 00:10:45.192 Could not set queue depth (nvme0n1) 00:10:45.192 Could not set queue depth (nvme0n2) 00:10:45.192 Could not set queue depth (nvme0n3) 00:10:45.192 Could not set queue depth (nvme0n4) 00:10:45.453 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.453 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.453 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.453 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.453 fio-3.35 00:10:45.453 Starting 4 threads 00:10:46.839 00:10:46.839 
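The fio-wrapper flags evidently map one-to-one onto the job file it just dumped (-i 4096 -> bs, -d 1 -> iodepth, -t randwrite -> rw, -r 1 -> runtime, -v -> the crc32c-intel verify block). The same four-device verify run could also be expressed as a single fio command line; a sketch assuming the namespaces enumerated as nvme0n1..nvme0n4, with globals given before the first --name so they apply to all jobs:

    fio --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 --ioengine=libaio --direct=1 \
        --invalidate=1 --thread=1 --norandommap=0 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4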
job0: (groupid=0, jobs=1): err= 0: pid=205593: Mon Dec 9 06:09:41 2024 00:10:46.839 read: IOPS=31, BW=126KiB/s (129kB/s)(128KiB/1014msec) 00:10:46.839 slat (nsec): min=24963, max=26439, avg=25585.00, stdev=330.52 00:10:46.839 clat (usec): min=840, max=42994, avg=25232.64, stdev=20395.45 00:10:46.839 lat (usec): min=866, max=43020, avg=25258.23, stdev=20395.50 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 840], 5.00th=[ 840], 10.00th=[ 955], 20.00th=[ 988], 00:10:46.839 | 30.00th=[ 1020], 40.00th=[ 1090], 50.00th=[41157], 60.00th=[41681], 00:10:46.839 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:46.839 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:46.839 | 99.99th=[43254] 00:10:46.839 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:10:46.839 slat (nsec): min=9345, max=80268, avg=16498.97, stdev=10693.04 00:10:46.839 clat (usec): min=132, max=951, avg=379.80, stdev=157.39 00:10:46.839 lat (usec): min=141, max=983, avg=396.30, stdev=165.36 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 237], 00:10:46.839 | 30.00th=[ 265], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 379], 00:10:46.839 | 70.00th=[ 445], 80.00th=[ 537], 90.00th=[ 627], 95.00th=[ 685], 00:10:46.839 | 99.00th=[ 750], 99.50th=[ 816], 99.90th=[ 955], 99.95th=[ 955], 00:10:46.839 | 99.99th=[ 955] 00:10:46.839 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.839 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.839 lat (usec) : 250=23.16%, 500=49.63%, 750=20.04%, 1000=2.94% 00:10:46.839 lat (msec) : 2=0.74%, 50=3.49% 00:10:46.839 cpu : usr=0.69%, sys=0.59%, ctx=549, majf=0, minf=1 00:10:46.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.839 job1: (groupid=0, jobs=1): err= 0: pid=205594: Mon Dec 9 06:09:41 2024 00:10:46.839 read: IOPS=17, BW=71.1KiB/s (72.9kB/s)(72.0KiB/1012msec) 00:10:46.839 slat (nsec): min=25483, max=26293, avg=25769.33, stdev=175.60 00:10:46.839 clat (usec): min=1107, max=42057, avg=39600.21, stdev=9610.16 00:10:46.839 lat (usec): min=1133, max=42083, avg=39625.98, stdev=9610.13 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41681], 00:10:46.839 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:46.839 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:46.839 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.839 | 99.99th=[42206] 00:10:46.839 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:46.839 slat (nsec): min=9682, max=60377, avg=29725.59, stdev=11238.70 00:10:46.839 clat (usec): min=203, max=848, avg=545.97, stdev=117.88 00:10:46.839 lat (usec): min=213, max=887, avg=575.70, stdev=122.97 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 221], 5.00th=[ 330], 10.00th=[ 396], 20.00th=[ 457], 00:10:46.839 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:10:46.839 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 717], 
00:10:46.839 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 848], 99.95th=[ 848], 00:10:46.839 | 99.99th=[ 848] 00:10:46.839 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.839 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.839 lat (usec) : 250=2.08%, 500=26.98%, 750=65.28%, 1000=2.26% 00:10:46.839 lat (msec) : 2=0.19%, 50=3.21% 00:10:46.839 cpu : usr=0.79%, sys=1.29%, ctx=531, majf=0, minf=2 00:10:46.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.839 job2: (groupid=0, jobs=1): err= 0: pid=205595: Mon Dec 9 06:09:41 2024 00:10:46.839 read: IOPS=157, BW=631KiB/s (646kB/s)(656KiB/1040msec) 00:10:46.839 slat (nsec): min=10615, max=61016, avg=28228.07, stdev=5011.90 00:10:46.839 clat (usec): min=834, max=42172, avg=4527.28, stdev=11414.66 00:10:46.839 lat (usec): min=862, max=42199, avg=4555.50, stdev=11414.10 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 889], 5.00th=[ 947], 10.00th=[ 963], 20.00th=[ 1004], 00:10:46.839 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:10:46.839 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[41681], 00:10:46.839 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.839 | 99.99th=[42206] 00:10:46.839 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:46.839 slat (nsec): min=9165, max=80236, avg=30629.49, stdev=11955.97 00:10:46.839 clat (usec): min=121, max=1126, avg=529.70, stdev=146.45 00:10:46.839 lat (usec): min=131, max=1136, avg=560.33, stdev=152.63 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 145], 5.00th=[ 249], 10.00th=[ 318], 20.00th=[ 429], 00:10:46.839 | 30.00th=[ 478], 40.00th=[ 523], 50.00th=[ 545], 60.00th=[ 570], 00:10:46.839 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 742], 00:10:46.839 | 99.00th=[ 824], 99.50th=[ 930], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:46.839 | 99.99th=[ 1123] 00:10:46.839 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.839 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.839 lat (usec) : 250=3.85%, 500=22.78%, 750=46.01%, 1000=7.54% 00:10:46.839 lat (msec) : 2=17.60%, 4=0.15%, 50=2.07% 00:10:46.839 cpu : usr=1.64%, sys=2.21%, ctx=677, majf=0, minf=1 00:10:46.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 issued rwts: total=164,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.839 job3: (groupid=0, jobs=1): err= 0: pid=205596: Mon Dec 9 06:09:41 2024 00:10:46.839 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:10:46.839 slat (nsec): min=25961, max=27117, avg=26493.77, stdev=381.26 00:10:46.839 clat (usec): min=897, max=42994, avg=32542.61, stdev=17551.94 00:10:46.839 lat (usec): min=924, max=43021, avg=32569.11, stdev=17551.90 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 898], 5.00th=[ 898], 
10.00th=[ 930], 20.00th=[ 996], 00:10:46.839 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:46.839 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:46.839 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:46.839 | 99.99th=[43254] 00:10:46.839 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:10:46.839 slat (nsec): min=8994, max=52502, avg=28082.37, stdev=9965.99 00:10:46.839 clat (usec): min=212, max=1078, avg=548.53, stdev=131.53 00:10:46.839 lat (usec): min=222, max=1087, avg=576.61, stdev=136.63 00:10:46.839 clat percentiles (usec): 00:10:46.839 | 1.00th=[ 249], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 445], 00:10:46.839 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 594], 00:10:46.839 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 734], 00:10:46.839 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 1074], 99.95th=[ 1074], 00:10:46.839 | 99.99th=[ 1074] 00:10:46.839 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.839 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.839 lat (usec) : 250=1.12%, 500=31.27%, 750=59.36%, 1000=4.87% 00:10:46.839 lat (msec) : 2=0.19%, 50=3.18% 00:10:46.839 cpu : usr=1.28%, sys=1.58%, ctx=534, majf=0, minf=2 00:10:46.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.839 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.839 00:10:46.839 Run status group 0 (all jobs): 00:10:46.839 READ: bw=908KiB/s (929kB/s), 71.1KiB/s-631KiB/s (72.9kB/s-646kB/s), io=944KiB (967kB), run=1012-1040msec 00:10:46.839 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2024KiB/s (2016kB/s-2072kB/s), io=8192KiB (8389kB), run=1012-1040msec 00:10:46.839 00:10:46.839 Disk stats (read/write): 00:10:46.839 nvme0n1: ios=64/512, merge=0/0, ticks=965/196, in_queue=1161, util=98.70% 00:10:46.839 nvme0n2: ios=56/512, merge=0/0, ticks=682/255, in_queue=937, util=92.28% 00:10:46.839 nvme0n3: ios=136/512, merge=0/0, ticks=1440/219, in_queue=1659, util=93.41% 00:10:46.839 nvme0n4: ios=75/512, merge=0/0, ticks=632/219, in_queue=851, util=95.46% 00:10:46.839 06:09:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:46.839 [global] 00:10:46.840 thread=1 00:10:46.840 invalidate=1 00:10:46.840 rw=write 00:10:46.840 time_based=1 00:10:46.840 runtime=1 00:10:46.840 ioengine=libaio 00:10:46.840 direct=1 00:10:46.840 bs=4096 00:10:46.840 iodepth=128 00:10:46.840 norandommap=0 00:10:46.840 numjobs=1 00:10:46.840 00:10:46.840 verify_dump=1 00:10:46.840 verify_backlog=512 00:10:46.840 verify_state_save=0 00:10:46.840 do_verify=1 00:10:46.840 verify=crc32c-intel 00:10:46.840 [job0] 00:10:46.840 filename=/dev/nvme0n1 00:10:46.840 [job1] 00:10:46.840 filename=/dev/nvme0n2 00:10:46.840 [job2] 00:10:46.840 filename=/dev/nvme0n3 00:10:46.840 [job3] 00:10:46.840 filename=/dev/nvme0n4 00:10:46.840 Could not set queue depth (nvme0n1) 00:10:46.840 Could not set queue depth (nvme0n2) 00:10:46.840 Could not set queue depth (nvme0n3) 00:10:46.840 Could not set queue depth (nvme0n4) 00:10:47.099 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.099 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.099 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.099 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.099 fio-3.35 00:10:47.099 Starting 4 threads 00:10:48.483 00:10:48.483 job0: (groupid=0, jobs=1): err= 0: pid=206075: Mon Dec 9 06:09:42 2024 00:10:48.483 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec) 00:10:48.483 slat (nsec): min=955, max=8075.1k, avg=63795.34, stdev=421770.13 00:10:48.483 clat (usec): min=4887, max=21904, avg=8039.22, stdev=1568.85 00:10:48.483 lat (usec): min=4893, max=21937, avg=8103.02, stdev=1606.06 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7373], 00:10:48.483 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:10:48.483 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10028], 00:10:48.483 | 99.00th=[15139], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:48.483 | 99.99th=[21890] 00:10:48.483 write: IOPS=8395, BW=32.8MiB/s (34.4MB/s)(33.0MiB/1005msec); 0 zone resets 00:10:48.483 slat (nsec): min=1647, max=7546.5k, avg=52456.76, stdev=237049.48 00:10:48.483 clat (usec): min=809, max=15303, avg=7255.54, stdev=1219.01 00:10:48.483 lat (usec): min=819, max=15332, avg=7307.99, stdev=1230.32 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 2933], 5.00th=[ 4752], 10.00th=[ 6063], 20.00th=[ 6915], 00:10:48.483 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:10:48.483 | 70.00th=[ 7570], 80.00th=[ 7635], 90.00th=[ 8160], 95.00th=[ 9241], 00:10:48.483 | 99.00th=[10683], 99.50th=[10683], 99.90th=[11076], 99.95th=[12649], 00:10:48.483 | 99.99th=[15270] 00:10:48.483 bw ( KiB/s): min=33200, max=33272, per=34.07%, avg=33236.00, stdev=50.91, samples=2 00:10:48.483 iops : min= 8300, max= 8318, avg=8309.00, stdev=12.73, samples=2 00:10:48.483 lat (usec) : 1000=0.02% 00:10:48.483 lat (msec) : 2=0.20%, 4=0.99%, 10=95.06%, 20=3.72%, 50=0.01% 00:10:48.483 cpu : usr=4.18%, sys=6.18%, ctx=1110, majf=0, minf=1 00:10:48.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:48.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.483 issued rwts: total=8192,8437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.483 job1: (groupid=0, jobs=1): err= 0: pid=206076: Mon Dec 9 06:09:42 2024 00:10:48.483 read: IOPS=3137, BW=12.3MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:10:48.483 slat (nsec): min=941, max=24947k, avg=161311.66, stdev=1267574.70 00:10:48.483 clat (usec): min=3560, max=84370, avg=20048.02, stdev=17042.71 00:10:48.483 lat (usec): min=3566, max=84382, avg=20209.33, stdev=17129.08 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 3818], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7570], 00:10:48.483 | 30.00th=[ 9765], 40.00th=[11338], 50.00th=[15795], 60.00th=[16581], 00:10:48.483 | 70.00th=[17957], 80.00th=[30540], 90.00th=[44303], 95.00th=[57934], 00:10:48.483 | 99.00th=[83362], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:10:48.483 | 99.99th=[84411] 
00:10:48.483 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:48.483 slat (nsec): min=1685, max=8750.9k, avg=132395.32, stdev=637908.29 00:10:48.483 clat (usec): min=3546, max=86672, avg=17834.00, stdev=15436.22 00:10:48.483 lat (usec): min=3644, max=86676, avg=17966.39, stdev=15524.45 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 4293], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 6980], 00:10:48.483 | 30.00th=[ 7439], 40.00th=[13960], 50.00th=[15270], 60.00th=[15533], 00:10:48.483 | 70.00th=[15926], 80.00th=[20841], 90.00th=[40633], 95.00th=[46400], 00:10:48.483 | 99.00th=[81265], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:10:48.483 | 99.99th=[86508] 00:10:48.483 bw ( KiB/s): min=12288, max=15992, per=14.49%, avg=14140.00, stdev=2619.12, samples=2 00:10:48.483 iops : min= 3072, max= 3998, avg=3535.00, stdev=654.78, samples=2 00:10:48.483 lat (msec) : 4=0.67%, 10=32.70%, 20=43.47%, 50=17.75%, 100=5.42% 00:10:48.483 cpu : usr=1.40%, sys=3.69%, ctx=404, majf=0, minf=2 00:10:48.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:48.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.483 issued rwts: total=3150,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.483 job2: (groupid=0, jobs=1): err= 0: pid=206077: Mon Dec 9 06:09:42 2024 00:10:48.483 read: IOPS=5414, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1004msec) 00:10:48.483 slat (nsec): min=1023, max=9113.6k, avg=86288.82, stdev=597926.74 00:10:48.483 clat (usec): min=2395, max=24059, avg=10369.33, stdev=3220.99 00:10:48.483 lat (usec): min=3399, max=24062, avg=10455.62, stdev=3258.70 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 4883], 5.00th=[ 6718], 10.00th=[ 7504], 20.00th=[ 8094], 00:10:48.483 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:10:48.483 | 70.00th=[10683], 80.00th=[12387], 90.00th=[14746], 95.00th=[17695], 00:10:48.483 | 99.00th=[21627], 99.50th=[22676], 99.90th=[23200], 99.95th=[23987], 00:10:48.483 | 99.99th=[23987] 00:10:48.483 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:48.483 slat (nsec): min=1765, max=41079k, avg=89351.35, stdev=688485.20 00:10:48.483 clat (usec): min=1218, max=42287, avg=11378.53, stdev=3727.14 00:10:48.483 lat (usec): min=1229, max=54981, avg=11467.89, stdev=3801.02 00:10:48.483 clat percentiles (usec): 00:10:48.483 | 1.00th=[ 3359], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 8094], 00:10:48.483 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[11469], 60.00th=[13698], 00:10:48.483 | 70.00th=[14746], 80.00th=[15401], 90.00th=[15795], 95.00th=[15926], 00:10:48.483 | 99.00th=[16319], 99.50th=[16319], 99.90th=[22938], 99.95th=[23987], 00:10:48.483 | 99.99th=[42206] 00:10:48.484 bw ( KiB/s): min=20480, max=24576, per=23.09%, avg=22528.00, stdev=2896.31, samples=2 00:10:48.484 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:48.484 lat (msec) : 2=0.08%, 4=1.11%, 10=52.71%, 20=44.77%, 50=1.33% 00:10:48.484 cpu : usr=3.09%, sys=5.98%, ctx=616, majf=0, minf=1 00:10:48.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:48.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.484 issued rwts: 
total=5436,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.484 job3: (groupid=0, jobs=1): err= 0: pid=206078: Mon Dec 9 06:09:42 2024 00:10:48.484 read: IOPS=6590, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1010msec) 00:10:48.484 slat (nsec): min=1007, max=8467.7k, avg=69825.31, stdev=510755.19 00:10:48.484 clat (usec): min=3204, max=19636, avg=9057.75, stdev=2072.44 00:10:48.484 lat (usec): min=3229, max=19647, avg=9127.58, stdev=2111.24 00:10:48.484 clat percentiles (usec): 00:10:48.484 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7898], 00:10:48.484 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8717], 00:10:48.484 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11731], 95.00th=[13566], 00:10:48.484 | 99.00th=[15926], 99.50th=[16581], 99.90th=[16712], 99.95th=[16909], 00:10:48.484 | 99.99th=[19530] 00:10:48.484 write: IOPS=6912, BW=27.0MiB/s (28.3MB/s)(27.3MiB/1010msec); 0 zone resets 00:10:48.484 slat (nsec): min=1735, max=19062k, avg=71004.46, stdev=509218.52 00:10:48.484 clat (usec): min=365, max=83507, avg=9194.54, stdev=9343.47 00:10:48.484 lat (usec): min=404, max=83510, avg=9265.54, stdev=9410.30 00:10:48.484 clat percentiles (usec): 00:10:48.484 | 1.00th=[ 2343], 5.00th=[ 4047], 10.00th=[ 5080], 20.00th=[ 6652], 00:10:48.484 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:10:48.484 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[11600], 00:10:48.484 | 99.00th=[68682], 99.50th=[72877], 99.90th=[81265], 99.95th=[83362], 00:10:48.484 | 99.99th=[83362] 00:10:48.484 bw ( KiB/s): min=25352, max=29480, per=28.10%, avg=27416.00, stdev=2918.94, samples=2 00:10:48.484 iops : min= 6338, max= 7370, avg=6854.00, stdev=729.73, samples=2 00:10:48.484 lat (usec) : 500=0.01%, 750=0.04% 00:10:48.484 lat (msec) : 2=0.26%, 4=2.57%, 10=81.17%, 20=13.97%, 50=0.94% 00:10:48.484 lat (msec) : 100=1.05% 00:10:48.484 cpu : usr=3.96%, sys=7.93%, ctx=648, majf=0, minf=1 00:10:48.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:48.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.484 issued rwts: total=6656,6982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.484 00:10:48.484 Run status group 0 (all jobs): 00:10:48.484 READ: bw=90.6MiB/s (95.0MB/s), 12.3MiB/s-31.8MiB/s (12.8MB/s-33.4MB/s), io=91.5MiB (96.0MB), run=1004-1010msec 00:10:48.484 WRITE: bw=95.3MiB/s (99.9MB/s), 13.9MiB/s-32.8MiB/s (14.6MB/s-34.4MB/s), io=96.2MiB (101MB), run=1004-1010msec 00:10:48.484 00:10:48.484 Disk stats (read/write): 00:10:48.484 nvme0n1: ios=6565/6656, merge=0/0, ticks=26653/23385, in_queue=50038, util=85.87% 00:10:48.484 nvme0n2: ios=2094/2151, merge=0/0, ticks=14098/11413, in_queue=25511, util=89.06% 00:10:48.484 nvme0n3: ios=4120/4311, merge=0/0, ticks=41850/48701, in_queue=90551, util=94.85% 00:10:48.484 nvme0n4: ios=5654/6144, merge=0/0, ticks=48219/43800, in_queue=92019, util=99.55% 00:10:48.484 06:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:48.484 [global] 00:10:48.484 thread=1 00:10:48.484 invalidate=1 00:10:48.484 rw=randwrite 00:10:48.484 time_based=1 00:10:48.484 runtime=1 00:10:48.484 ioengine=libaio 00:10:48.484 
direct=1 00:10:48.484 bs=4096 00:10:48.484 iodepth=128 00:10:48.484 norandommap=0 00:10:48.484 numjobs=1 00:10:48.484 00:10:48.484 verify_dump=1 00:10:48.484 verify_backlog=512 00:10:48.484 verify_state_save=0 00:10:48.484 do_verify=1 00:10:48.484 verify=crc32c-intel 00:10:48.484 [job0] 00:10:48.484 filename=/dev/nvme0n1 00:10:48.484 [job1] 00:10:48.484 filename=/dev/nvme0n2 00:10:48.484 [job2] 00:10:48.484 filename=/dev/nvme0n3 00:10:48.484 [job3] 00:10:48.484 filename=/dev/nvme0n4 00:10:48.484 Could not set queue depth (nvme0n1) 00:10:48.484 Could not set queue depth (nvme0n2) 00:10:48.484 Could not set queue depth (nvme0n3) 00:10:48.484 Could not set queue depth (nvme0n4) 00:10:48.744 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.744 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.744 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.744 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.744 fio-3.35 00:10:48.745 Starting 4 threads 00:10:50.129 00:10:50.129 job0: (groupid=0, jobs=1): err= 0: pid=206546: Mon Dec 9 06:09:44 2024 00:10:50.129 read: IOPS=4877, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1005msec) 00:10:50.129 slat (nsec): min=961, max=19848k, avg=111967.30, stdev=826867.11 00:10:50.129 clat (usec): min=2265, max=48846, avg=12448.60, stdev=6585.57 00:10:50.129 lat (usec): min=3482, max=48848, avg=12560.56, stdev=6664.71 00:10:50.129 clat percentiles (usec): 00:10:50.129 | 1.00th=[ 5342], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 7439], 00:10:50.129 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 9765], 60.00th=[13304], 00:10:50.129 | 70.00th=[14746], 80.00th=[15533], 90.00th=[21103], 95.00th=[24511], 00:10:50.129 | 99.00th=[40633], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:10:50.129 | 99.99th=[49021] 00:10:50.129 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:50.129 slat (nsec): min=1617, max=12027k, avg=83404.64, stdev=449599.57 00:10:50.129 clat (usec): min=1129, max=48838, avg=12984.41, stdev=7507.50 00:10:50.129 lat (usec): min=1138, max=48840, avg=13067.82, stdev=7541.06 00:10:50.129 clat percentiles (usec): 00:10:50.129 | 1.00th=[ 3884], 5.00th=[ 4752], 10.00th=[ 5866], 20.00th=[ 6390], 00:10:50.129 | 30.00th=[ 7373], 40.00th=[10683], 50.00th=[13829], 60.00th=[14484], 00:10:50.129 | 70.00th=[14877], 80.00th=[15270], 90.00th=[19268], 95.00th=[27132], 00:10:50.130 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43779], 00:10:50.130 | 99.99th=[49021] 00:10:50.130 bw ( KiB/s): min=17280, max=23680, per=22.97%, avg=20480.00, stdev=4525.48, samples=2 00:10:50.130 iops : min= 4320, max= 5920, avg=5120.00, stdev=1131.37, samples=2 00:10:50.130 lat (msec) : 2=0.05%, 4=0.66%, 10=43.93%, 20=44.87%, 50=10.49% 00:10:50.130 cpu : usr=3.09%, sys=5.58%, ctx=486, majf=0, minf=1 00:10:50.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:50.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.130 issued rwts: total=4902,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.130 job1: (groupid=0, jobs=1): err= 0: pid=206547: Mon Dec 9 06:09:44 2024 
00:10:50.130 read: IOPS=6094, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1003msec) 00:10:50.130 slat (nsec): min=900, max=22981k, avg=88059.87, stdev=760886.84 00:10:50.130 clat (usec): min=1053, max=63110, avg=12107.33, stdev=11984.35 00:10:50.130 lat (usec): min=1687, max=63115, avg=12195.39, stdev=12051.98 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 3064], 5.00th=[ 4424], 10.00th=[ 5932], 20.00th=[ 6718], 00:10:50.130 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8848], 00:10:50.130 | 70.00th=[ 9765], 80.00th=[11994], 90.00th=[22938], 95.00th=[47449], 00:10:50.130 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:10:50.130 | 99.99th=[63177] 00:10:50.130 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:50.130 slat (nsec): min=1505, max=14241k, avg=67300.63, stdev=510435.93 00:10:50.130 clat (usec): min=1004, max=40461, avg=8612.13, stdev=5841.49 00:10:50.130 lat (usec): min=1006, max=40468, avg=8679.43, stdev=5885.54 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 1762], 5.00th=[ 3621], 10.00th=[ 4490], 20.00th=[ 5932], 00:10:50.130 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7308], 00:10:50.130 | 70.00th=[ 8225], 80.00th=[10290], 90.00th=[12256], 95.00th=[19268], 00:10:50.130 | 99.00th=[37487], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:10:50.130 | 99.99th=[40633] 00:10:50.130 bw ( KiB/s): min=12592, max=36560, per=27.57%, avg=24576.00, stdev=16947.94, samples=2 00:10:50.130 iops : min= 3148, max= 9140, avg=6144.00, stdev=4236.98, samples=2 00:10:50.130 lat (msec) : 2=0.91%, 4=4.21%, 10=70.81%, 20=15.80%, 50=6.00% 00:10:50.130 lat (msec) : 100=2.28% 00:10:50.130 cpu : usr=3.49%, sys=6.39%, ctx=465, majf=0, minf=1 00:10:50.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:50.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.130 issued rwts: total=6113,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.130 job2: (groupid=0, jobs=1): err= 0: pid=206548: Mon Dec 9 06:09:44 2024 00:10:50.130 read: IOPS=7034, BW=27.5MiB/s (28.8MB/s)(27.6MiB/1004msec) 00:10:50.130 slat (nsec): min=939, max=11068k, avg=71315.52, stdev=459021.10 00:10:50.130 clat (usec): min=1475, max=20406, avg=9200.20, stdev=2137.09 00:10:50.130 lat (usec): min=2530, max=20417, avg=9271.51, stdev=2161.21 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 4752], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7767], 00:10:50.130 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:10:50.130 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11863], 95.00th=[13435], 00:10:50.130 | 99.00th=[16581], 99.50th=[17957], 99.90th=[20055], 99.95th=[20317], 00:10:50.130 | 99.99th=[20317] 00:10:50.130 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:10:50.130 slat (nsec): min=1549, max=6047.7k, avg=64271.72, stdev=357349.75 00:10:50.130 clat (usec): min=1623, max=20355, avg=8641.21, stdev=2656.70 00:10:50.130 lat (usec): min=1632, max=20358, avg=8705.48, stdev=2668.80 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 3818], 5.00th=[ 4424], 10.00th=[ 5080], 20.00th=[ 6390], 00:10:50.130 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:10:50.130 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[12518], 95.00th=[13960], 
00:10:50.130 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16319], 99.95th=[18482], 00:10:50.130 | 99.99th=[20317] 00:10:50.130 bw ( KiB/s): min=27816, max=29528, per=32.16%, avg=28672.00, stdev=1210.57, samples=2 00:10:50.130 iops : min= 6954, max= 7382, avg=7168.00, stdev=302.64, samples=2 00:10:50.130 lat (msec) : 2=0.11%, 4=1.07%, 10=72.77%, 20=25.99%, 50=0.06% 00:10:50.130 cpu : usr=4.19%, sys=6.58%, ctx=680, majf=0, minf=1 00:10:50.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:50.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.130 issued rwts: total=7063,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.130 job3: (groupid=0, jobs=1): err= 0: pid=206549: Mon Dec 9 06:09:44 2024 00:10:50.130 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:50.130 slat (nsec): min=1040, max=13542k, avg=111736.88, stdev=794706.85 00:10:50.130 clat (usec): min=4124, max=33069, avg=13293.91, stdev=5452.93 00:10:50.130 lat (usec): min=4128, max=33078, avg=13405.65, stdev=5505.22 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 5407], 5.00th=[ 7046], 10.00th=[ 7767], 20.00th=[ 8586], 00:10:50.130 | 30.00th=[ 9241], 40.00th=[10945], 50.00th=[12649], 60.00th=[14615], 00:10:50.130 | 70.00th=[15139], 80.00th=[16188], 90.00th=[20317], 95.00th=[25822], 00:10:50.130 | 99.00th=[31065], 99.50th=[32113], 99.90th=[33162], 99.95th=[33162], 00:10:50.130 | 99.99th=[33162] 00:10:50.130 write: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1009msec); 0 zone resets 00:10:50.130 slat (nsec): min=1710, max=17124k, avg=142629.62, stdev=728991.47 00:10:50.130 clat (usec): min=2650, max=75311, avg=19762.47, stdev=13149.31 00:10:50.130 lat (usec): min=2657, max=75320, avg=19905.09, stdev=13231.92 00:10:50.130 clat percentiles (usec): 00:10:50.130 | 1.00th=[ 3425], 5.00th=[ 6521], 10.00th=[10552], 20.00th=[13829], 00:10:50.130 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:10:50.130 | 70.00th=[18482], 80.00th=[21627], 90.00th=[36439], 95.00th=[54264], 00:10:50.130 | 99.00th=[68682], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:10:50.130 | 99.99th=[74974] 00:10:50.130 bw ( KiB/s): min=14984, max=16432, per=17.62%, avg=15708.00, stdev=1023.89, samples=2 00:10:50.130 iops : min= 3746, max= 4108, avg=3927.00, stdev=255.97, samples=2 00:10:50.130 lat (msec) : 4=1.00%, 10=19.29%, 20=61.38%, 50=15.04%, 100=3.30% 00:10:50.130 cpu : usr=3.67%, sys=3.08%, ctx=495, majf=0, minf=1 00:10:50.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:50.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.130 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.130 00:10:50.130 Run status group 0 (all jobs): 00:10:50.130 READ: bw=83.9MiB/s (87.9MB/s), 13.9MiB/s-27.5MiB/s (14.5MB/s-28.8MB/s), io=84.6MiB (88.7MB), run=1003-1009msec 00:10:50.130 WRITE: bw=87.1MiB/s (91.3MB/s), 15.7MiB/s-27.9MiB/s (16.5MB/s-29.2MB/s), io=87.8MiB (92.1MB), run=1003-1009msec 00:10:50.130 00:10:50.130 Disk stats (read/write): 00:10:50.130 nvme0n1: ios=3634/3991, merge=0/0, ticks=47433/54254, in_queue=101687, util=88.48% 00:10:50.130 nvme0n2: 
ios=5674/5632, merge=0/0, ticks=34024/25964, in_queue=59988, util=88.82% 00:10:50.130 nvme0n3: ios=5766/6144, merge=0/0, ticks=28622/26572, in_queue=55194, util=88.61% 00:10:50.130 nvme0n4: ios=3317/3584, merge=0/0, ticks=44504/60503, in_queue=105007, util=98.42% 00:10:50.130 06:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:50.130 06:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=206766 00:10:50.130 06:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:50.130 06:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:50.130 [global] 00:10:50.130 thread=1 00:10:50.130 invalidate=1 00:10:50.130 rw=read 00:10:50.130 time_based=1 00:10:50.130 runtime=10 00:10:50.130 ioengine=libaio 00:10:50.130 direct=1 00:10:50.130 bs=4096 00:10:50.130 iodepth=1 00:10:50.130 norandommap=1 00:10:50.130 numjobs=1 00:10:50.130 00:10:50.130 [job0] 00:10:50.130 filename=/dev/nvme0n1 00:10:50.130 [job1] 00:10:50.130 filename=/dev/nvme0n2 00:10:50.130 [job2] 00:10:50.130 filename=/dev/nvme0n3 00:10:50.130 [job3] 00:10:50.130 filename=/dev/nvme0n4 00:10:50.130 Could not set queue depth (nvme0n1) 00:10:50.130 Could not set queue depth (nvme0n2) 00:10:50.130 Could not set queue depth (nvme0n3) 00:10:50.130 Could not set queue depth (nvme0n4) 00:10:50.390 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.390 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.390 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.390 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.390 fio-3.35 00:10:50.390 Starting 4 threads 00:10:52.944 06:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:53.204 06:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:53.204 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:10:53.204 fio: pid=207031, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.464 06:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.464 06:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:53.464 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1798144, buflen=4096 00:10:53.464 fio: pid=207030, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.464 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.464 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:53.464 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=307200, buflen=4096 
00:10:53.464 fio: pid=207027, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:53.725 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.725 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:53.725 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=303104, buflen=4096 00:10:53.725 fio: pid=207028, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.725 00:10:53.725 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=207027: Mon Dec 9 06:09:48 2024 00:10:53.725 read: IOPS=25, BW=102KiB/s (104kB/s)(300KiB/2948msec) 00:10:53.725 slat (usec): min=23, max=6951, avg=116.55, stdev=794.46 00:10:53.725 clat (usec): min=628, max=42912, avg=39170.15, stdev=10308.34 00:10:53.725 lat (usec): min=653, max=42936, avg=39195.58, stdev=10307.68 00:10:53.725 clat percentiles (usec): 00:10:53.725 | 1.00th=[ 627], 5.00th=[ 971], 10.00th=[41157], 20.00th=[41681], 00:10:53.725 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:53.725 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:53.725 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:53.725 | 99.99th=[42730] 00:10:53.725 bw ( KiB/s): min= 96, max= 104, per=12.05%, avg=100.80, stdev= 4.38, samples=5 00:10:53.725 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:10:53.725 lat (usec) : 750=1.32%, 1000=3.95% 00:10:53.725 lat (msec) : 2=1.32%, 50=92.11% 00:10:53.725 cpu : usr=0.00%, sys=0.31%, ctx=77, majf=0, minf=1 00:10:53.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.725 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=207028: Mon Dec 9 06:09:48 2024 00:10:53.725 read: IOPS=23, BW=94.4KiB/s (96.6kB/s)(296KiB/3137msec) 00:10:53.725 slat (usec): min=25, max=19586, avg=703.27, stdev=3364.33 00:10:53.725 clat (usec): min=770, max=42921, avg=41383.93, stdev=4794.00 00:10:53.725 lat (usec): min=810, max=59054, avg=41832.02, stdev=5509.46 00:10:53.725 clat percentiles (usec): 00:10:53.725 | 1.00th=[ 775], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:53.725 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:53.725 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:53.725 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:53.725 | 99.99th=[42730] 00:10:53.725 bw ( KiB/s): min= 90, max= 96, per=11.44%, avg=95.00, stdev= 2.45, samples=6 00:10:53.725 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:10:53.725 lat (usec) : 1000=1.33% 00:10:53.725 lat (msec) : 50=97.33% 00:10:53.725 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=2 00:10:53.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 complete : 0=1.3%, 
4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.725 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=207030: Mon Dec 9 06:09:48 2024 00:10:53.725 read: IOPS=158, BW=632KiB/s (647kB/s)(1756KiB/2778msec) 00:10:53.725 slat (usec): min=7, max=16868, avg=98.09, stdev=1071.38 00:10:53.725 clat (usec): min=850, max=42960, avg=6173.53, stdev=13565.56 00:10:53.725 lat (usec): min=876, max=42985, avg=6271.79, stdev=13580.59 00:10:53.725 clat percentiles (usec): 00:10:53.725 | 1.00th=[ 898], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1004], 00:10:53.725 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:10:53.725 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[41681], 95.00th=[42206], 00:10:53.725 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:53.725 | 99.99th=[42730] 00:10:53.725 bw ( KiB/s): min= 88, max= 1264, per=74.33%, avg=617.60, stdev=571.35, samples=5 00:10:53.725 iops : min= 22, max= 316, avg=154.40, stdev=142.84, samples=5 00:10:53.725 lat (usec) : 1000=18.18% 00:10:53.725 lat (msec) : 2=69.09%, 50=12.50% 00:10:53.725 cpu : usr=0.22%, sys=0.43%, ctx=442, majf=0, minf=1 00:10:53.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 issued rwts: total=440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.725 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=207031: Mon Dec 9 06:09:48 2024 00:10:53.725 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(252KiB/2620msec) 00:10:53.725 slat (nsec): min=26084, max=34866, avg=26712.09, stdev=1088.12 00:10:53.725 clat (usec): min=1179, max=42920, avg=41205.73, stdev=5137.54 00:10:53.725 lat (usec): min=1214, max=42947, avg=41232.45, stdev=5136.50 00:10:53.725 clat percentiles (usec): 00:10:53.725 | 1.00th=[ 1188], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:53.725 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:53.725 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:53.725 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:53.725 | 99.99th=[42730] 00:10:53.725 bw ( KiB/s): min= 96, max= 96, per=11.56%, avg=96.00, stdev= 0.00, samples=5 00:10:53.725 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:53.725 lat (msec) : 2=1.56%, 50=96.88% 00:10:53.725 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:10:53.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.725 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.725 00:10:53.725 Run status group 0 (all jobs): 00:10:53.725 READ: bw=830KiB/s (850kB/s), 94.4KiB/s-632KiB/s (96.6kB/s-647kB/s), io=2604KiB (2666kB), run=2620-3137msec 00:10:53.725 00:10:53.725 Disk stats (read/write): 00:10:53.725 nvme0n1: ios=71/0, merge=0/0, ticks=2772/0, 
in_queue=2772, util=93.02%
00:10:53.725 nvme0n2: ios=72/0, merge=0/0, ticks=2980/0, in_queue=2980, util=93.53%
00:10:53.725 nvme0n3: ios=403/0, merge=0/0, ticks=2486/0, in_queue=2486, util=95.59%
00:10:53.725 nvme0n4: ios=61/0, merge=0/0, ticks=2514/0, in_queue=2514, util=96.35%
00:10:53.986 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:53.986 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:10:53.986 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:53.986 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:10:54.247 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:54.247 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:10:54.507 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:54.507 06:09:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:10:54.507 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:10:54.508 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 206766
00:10:54.508 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:10:54.508 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:54.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:10:54.768 nvmf hotplug test: fio failed as expected
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
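The trace above closes out the fio hotplug case: while the wrapper-launched fio read job (iodepth=1, 10-second runtime) was still running against /dev/nvme0n1 through /dev/nvme0n4, the harness deleted the raid bdevs and every Malloc bdev behind the subsystem, each fio thread died with 'Operation not supported' or 'Input/output error', and wait returned fio_status=4, so the '[' 4 -eq 0 ']' check falls through to the expected-failure branch. A condensed sketch of that flow, with $SPDK standing in for the long workspace path (an abbreviation for readability, not a variable the harness actually sets):

  # Hotplug sketch: start I/O in the background, delete the backing bdevs
  # mid-run, and expect fio to exit non-zero.
  $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  $SPDK/scripts/rpc.py bdev_raid_delete concat0
  $SPDK/scripts/rpc.py bdev_raid_delete raid0
  for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $SPDK/scripts/rpc.py bdev_malloc_delete "$bdev"
  done
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1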
00:10:54.768 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:55.028 rmmod nvme_tcp
00:10:55.028 rmmod nvme_fabrics
00:10:55.028 rmmod nvme_keyring
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 203655 ']'
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 203655
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 203655 ']'
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 203655
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203655
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203655'
00:10:55.028 killing process with pid 203655
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 203655
00:10:55.028 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 203655
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:55.290 06:09:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:57.204
00:10:57.204 real 0m28.887s
00:10:57.204 user 2m5.734s
00:10:57.204 sys 0m8.932s
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:57.204 ************************************
00:10:57.204 END TEST nvmf_fio_target
00:10:57.204 ************************************
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:57.204 ************************************
00:10:57.204 START TEST nvmf_bdevio
00:10:57.204 ************************************
00:10:57.204 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
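The nvmftestfini sequence traced above tears the environment down in layers: sync and unload the kernel initiator modules (the rmmod lines show modprobe -v -r nvme-tcp also dropping the dependent nvme_fabrics and nvme_keyring modules), kill the nvmf_tgt reactor process, strip only the SPDK-tagged firewall rules, and remove the target network namespace. Roughly, under the same helper names (the netns delete inside _remove_spdk_ns is an assumption inferred from the address flush that follows it):

  sync
  modprobe -v -r nvme-tcp        # also removes nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop SPDK rules only
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1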
00:10:57.465 * Looking for test storage...
00:10:57.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:57.465 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:57.465 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.466 --rc genhtml_branch_coverage=1
00:10:57.466 --rc genhtml_function_coverage=1
00:10:57.466 --rc genhtml_legend=1
00:10:57.466 --rc geninfo_all_blocks=1
00:10:57.466 --rc geninfo_unexecuted_blocks=1
00:10:57.466
00:10:57.466 '
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.466 --rc genhtml_branch_coverage=1
00:10:57.466 --rc genhtml_function_coverage=1
00:10:57.466 --rc genhtml_legend=1
00:10:57.466 --rc geninfo_all_blocks=1
00:10:57.466 --rc geninfo_unexecuted_blocks=1
00:10:57.466
00:10:57.466 '
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.466 --rc genhtml_branch_coverage=1
00:10:57.466 --rc genhtml_function_coverage=1
00:10:57.466 --rc genhtml_legend=1
00:10:57.466 --rc geninfo_all_blocks=1
00:10:57.466 --rc geninfo_unexecuted_blocks=1
00:10:57.466
00:10:57.466 '
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:57.466 --rc genhtml_branch_coverage=1
00:10:57.466 --rc genhtml_function_coverage=1
00:10:57.466 --rc genhtml_legend=1
00:10:57.466 --rc geninfo_all_blocks=1
00:10:57.466 --rc geninfo_unexecuted_blocks=1
00:10:57.466
00:10:57.466 '
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio --
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.466 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.467 06:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.609 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.610 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.610 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.610 06:09:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.610 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.610 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.610 
06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:05.610 06:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:05.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:05.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms
00:11:05.610
00:11:05.610 --- 10.0.0.2 ping statistics ---
00:11:05.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.610 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms
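The setup above is the physical-NIC topology for this run: one e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and the firewall rule carries an SPDK_NVMF-tagged comment so teardown can strip it with a plain grep -v. A minimal sketch of the same steps (the comment text is abbreviated here, since the trace embeds the full rule in it, and the reverse ping follows just below):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns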
00:11:05.610 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:05.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:05.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms
00:11:05.611
00:11:05.611 --- 10.0.0.1 ping statistics ---
00:11:05.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.611 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=211662
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 211662
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 211662 ']'
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:05.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:05.611 06:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
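nvmfappstart then launches the target inside that namespace with core mask 0x78 (cores 3 through 6, which matches the four 'Reactor started' lines below) and all tracepoint groups enabled (-e 0xFFFF), and waitforlisten blocks until the RPC socket is up. A simplified sketch (the real helper retries up to max_retries=100 against /var/tmp/spdk.sock; the socket poll below is an illustrative stand-in, not the harness code):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do      # assumed poll loop, ~10s budget
      [ -S "$rpc_addr" ] && break
      sleep 0.1
  done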
00:11:05.611 [2024-12-09 06:09:59.330388] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.611 [2024-12-09 06:09:59.409031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.611 [2024-12-09 06:09:59.459207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.611 [2024-12-09 06:09:59.459260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.611 [2024-12-09 06:09:59.459268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.611 [2024-12-09 06:09:59.459275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.611 [2024-12-09 06:09:59.459281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.611 [2024-12-09 06:09:59.461489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:05.611 [2024-12-09 06:09:59.461646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:05.611 [2024-12-09 06:09:59.461801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.611 [2024-12-09 06:09:59.461801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:05.611 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.611 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:05.611 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.611 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.611 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.872 [2024-12-09 06:10:00.230150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:05.872 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.873 Malloc0 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.873 06:10:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.873 [2024-12-09 06:10:00.303477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:05.873 { 00:11:05.873 "params": { 00:11:05.873 "name": "Nvme$subsystem", 00:11:05.873 "trtype": "$TEST_TRANSPORT", 00:11:05.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.873 "adrfam": "ipv4", 00:11:05.873 "trsvcid": "$NVMF_PORT", 00:11:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.873 "hdgst": ${hdgst:-false}, 00:11:05.873 "ddgst": ${ddgst:-false} 00:11:05.873 }, 00:11:05.873 "method": "bdev_nvme_attach_controller" 00:11:05.873 } 00:11:05.873 EOF 00:11:05.873 )") 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:05.873 06:10:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:05.873 "params": { 00:11:05.873 "name": "Nvme1", 00:11:05.873 "trtype": "tcp", 00:11:05.873 "traddr": "10.0.0.2", 00:11:05.873 "adrfam": "ipv4", 00:11:05.873 "trsvcid": "4420", 00:11:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:05.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:05.873 "hdgst": false, 00:11:05.873 "ddgst": false 00:11:05.873 }, 00:11:05.873 "method": "bdev_nvme_attach_controller" 00:11:05.873 }' 00:11:05.873 [2024-12-09 06:10:00.362115] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:11:05.873 [2024-12-09 06:10:00.362184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211909 ] 00:11:05.873 [2024-12-09 06:10:00.455069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.133 [2024-12-09 06:10:00.509346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.133 [2024-12-09 06:10:00.509506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.133 [2024-12-09 06:10:00.509507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.394 I/O targets: 00:11:06.394 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:06.394 00:11:06.394 00:11:06.394 CUnit - A unit testing framework for C - Version 2.1-3 00:11:06.394 http://cunit.sourceforge.net/ 00:11:06.394 00:11:06.394 00:11:06.394 Suite: bdevio tests on: Nvme1n1 00:11:06.394 Test: blockdev write read block ...passed 00:11:06.394 Test: blockdev write zeroes read block ...passed 00:11:06.394 Test: blockdev write zeroes read no split ...passed 00:11:06.394 Test: blockdev write zeroes read split ...passed 00:11:06.394 Test: blockdev write zeroes read split partial ...passed 00:11:06.394 Test: blockdev reset ...[2024-12-09 06:10:00.946486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:06.394 [2024-12-09 06:10:00.946543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161aa40 (9): Bad file descriptor 00:11:06.655 [2024-12-09 06:10:01.003759] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:06.655 passed 00:11:06.655 Test: blockdev write read 8 blocks ...passed 00:11:06.655 Test: blockdev write read size > 128k ...passed 00:11:06.655 Test: blockdev write read invalid size ...passed 00:11:06.655 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.655 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.655 Test: blockdev write read max offset ...passed 00:11:06.655 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.655 Test: blockdev writev readv 8 blocks ...passed 00:11:06.655 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.655 Test: blockdev writev readv block ...passed 00:11:06.655 Test: blockdev writev readv size > 128k ...passed 00:11:06.655 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.655 Test: blockdev comparev and writev ...[2024-12-09 06:10:01.186853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.186884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.186898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.186907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.187323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.187334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.187347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.187354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.187771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.187781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.187794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.187801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.188253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.188263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:06.655 [2024-12-09 06:10:01.188276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.655 [2024-12-09 06:10:01.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:06.655 passed 00:11:06.915 Test: blockdev nvme passthru rw ...passed 00:11:06.915 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:10:01.272202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.916 [2024-12-09 06:10:01.272218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:06.916 [2024-12-09 06:10:01.272538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.916 [2024-12-09 06:10:01.272548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:06.916 [2024-12-09 06:10:01.272867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.916 [2024-12-09 06:10:01.272876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:06.916 [2024-12-09 06:10:01.273204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.916 [2024-12-09 06:10:01.273221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:06.916 passed 00:11:06.916 Test: blockdev nvme admin passthru ...passed 00:11:06.916 Test: blockdev copy ...passed 00:11:06.916 00:11:06.916 Run Summary: Type Total Ran Passed Failed Inactive 00:11:06.916 suites 1 1 n/a 0 0 00:11:06.916 tests 23 23 23 0 0 00:11:06.916 asserts 152 152 152 0 n/a 00:11:06.916 00:11:06.916 Elapsed time = 0.990 seconds 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.916 rmmod nvme_tcp 00:11:06.916 rmmod nvme_fabrics 00:11:06.916 rmmod nvme_keyring 00:11:06.916 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
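For reference, the target provisioning that the rpc_cmd trace above walks through can be replayed by hand. A minimal sketch, assuming an nvmf_tgt already running with the default /var/tmp/spdk.sock RPC socket and scripts/rpc.py from the SPDK tree (commands, NQNs, and addresses are taken verbatim from the trace; adjust to your setup):

  # target/bdevio.sh@18: create the TCP transport with the options from the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # target/bdevio.sh@19: 64 MiB malloc bdev with 512-byte blocks, matching
  # the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" line in the I/O targets list
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # target/bdevio.sh@20-22: subsystem allowing any host (-a), namespace, TCP listener
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The /dev/fd/62 argument in the bdevio invocation is process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON shown in the trace, and bdevio reads it as its --json config, attaching to 10.0.0.2:4420 before running the suite.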
00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 211662 ']' 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 211662 ']' 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211662' 00:11:07.175 killing process with pid 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 211662 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.175 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.176 06:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.724 06:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.724 00:11:09.724 real 0m12.012s 00:11:09.724 user 0m13.342s 00:11:09.724 sys 0m6.061s 00:11:09.724 06:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.724 06:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 ************************************ 00:11:09.724 END TEST nvmf_bdevio 00:11:09.724 ************************************ 00:11:09.724 06:10:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:09.724 00:11:09.724 real 4m59.273s 00:11:09.725 user 11m10.934s 00:11:09.725 sys 1m46.723s 00:11:09.725 
06:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.725 06:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.725 ************************************ 00:11:09.725 END TEST nvmf_target_core 00:11:09.725 ************************************ 00:11:09.725 06:10:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.725 06:10:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.725 06:10:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.725 06:10:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.725 ************************************ 00:11:09.725 START TEST nvmf_target_extra 00:11:09.725 ************************************ 00:11:09.725 06:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.725 * Looking for test storage... 00:11:09.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:09.725 06:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.725 06:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.725 06:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.725 --rc genhtml_branch_coverage=1 00:11:09.725 --rc genhtml_function_coverage=1 00:11:09.725 --rc genhtml_legend=1 00:11:09.725 --rc geninfo_all_blocks=1 00:11:09.725 --rc geninfo_unexecuted_blocks=1 00:11:09.725 00:11:09.725 ' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.725 --rc genhtml_branch_coverage=1 00:11:09.725 --rc genhtml_function_coverage=1 00:11:09.725 --rc genhtml_legend=1 00:11:09.725 --rc geninfo_all_blocks=1 00:11:09.725 --rc geninfo_unexecuted_blocks=1 00:11:09.725 00:11:09.725 ' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.725 --rc genhtml_branch_coverage=1 00:11:09.725 --rc genhtml_function_coverage=1 00:11:09.725 --rc genhtml_legend=1 00:11:09.725 --rc geninfo_all_blocks=1 00:11:09.725 --rc geninfo_unexecuted_blocks=1 00:11:09.725 00:11:09.725 ' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.725 --rc genhtml_branch_coverage=1 00:11:09.725 --rc genhtml_function_coverage=1 00:11:09.725 --rc genhtml_legend=1 00:11:09.725 --rc geninfo_all_blocks=1 00:11:09.725 --rc geninfo_unexecuted_blocks=1 00:11:09.725 00:11:09.725 ' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.725 06:10:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.726 ************************************ 00:11:09.726 START TEST nvmf_example 00:11:09.726 ************************************ 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:09.726 * Looking for test storage... 
00:11:09.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.726 --rc genhtml_branch_coverage=1 00:11:09.726 --rc genhtml_function_coverage=1 00:11:09.726 --rc genhtml_legend=1 00:11:09.726 --rc geninfo_all_blocks=1 00:11:09.726 --rc geninfo_unexecuted_blocks=1 00:11:09.726 00:11:09.726 ' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.726 --rc genhtml_branch_coverage=1 00:11:09.726 --rc genhtml_function_coverage=1 00:11:09.726 --rc genhtml_legend=1 00:11:09.726 --rc geninfo_all_blocks=1 00:11:09.726 --rc geninfo_unexecuted_blocks=1 00:11:09.726 00:11:09.726 ' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.726 --rc genhtml_branch_coverage=1 00:11:09.726 --rc genhtml_function_coverage=1 00:11:09.726 --rc genhtml_legend=1 00:11:09.726 --rc geninfo_all_blocks=1 00:11:09.726 --rc geninfo_unexecuted_blocks=1 00:11:09.726 00:11:09.726 ' 00:11:09.726 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.726 --rc genhtml_branch_coverage=1 00:11:09.726 --rc genhtml_function_coverage=1 00:11:09.726 --rc genhtml_legend=1 00:11:09.726 --rc geninfo_all_blocks=1 00:11:09.727 --rc geninfo_unexecuted_blocks=1 00:11:09.727 00:11:09.727 ' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:09.727 06:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:09.727 06:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.727 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:17.877 06:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:17.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:11:17.877 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:11:17.877 Found net devices under 0000:4b:00.0: cvl_0_0
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:11:17.877 Found net devices under 0000:4b:00.1: cvl_0_1
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:17.877 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:17.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:17.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms
00:11:17.878
00:11:17.878 --- 10.0.0.2 ping statistics ---
00:11:17.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:17.878 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:17.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:17.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms
00:11:17.878
00:11:17.878 --- 10.0.0.1 ping statistics ---
00:11:17.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:17.878 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=216185
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 216185
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 216185 ']'
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
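The nvmf_tcp_init sequence traced above splits the dual-port NIC between the default namespace (initiator side, 10.0.0.1) and a dedicated target namespace (10.0.0.2), so both ends of the NVMe/TCP connection traverse a real link on one host. Reduced to a standalone sketch, using the interface names and addresses from this run (run as root; interface names vary per machine):

  ip netns add cvl_0_0_ns_spdk                                  # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on port 4420
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator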
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:17.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:17.878 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
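The five rpc_cmd calls above are the minimal NVMe-oF target bring-up, in the required order: transport, backing bdev, subsystem, then namespace and listener attached to that subsystem. rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, so against an already-running target the same sequence could be issued directly, a sketch assuming the default /var/tmp/spdk.sock RPC socket and flags exactly as traced (the -a allows any host NQN, -s sets the serial number, -u the IO unit size):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                      # 64 MiB RAM bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then connects with the matching 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' descriptor, as traced below.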
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:18.141 06:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:30.378 Initializing NVMe Controllers
00:11:30.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:30.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:30.378 Initialization complete. Launching workers.
00:11:30.378 ========================================================
00:11:30.378 Latency(us)
00:11:30.378 Device Information : IOPS MiB/s Average min max
00:11:30.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19065.09 74.47 3356.71 632.30 15572.52
00:11:30.378 ========================================================
00:11:30.378 Total : 19065.09 74.47 3356.71 632.30 15572.52
00:11:30.378
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:30.378 rmmod nvme_tcp
00:11:30.378 rmmod nvme_fabrics
00:11:30.378 rmmod nvme_keyring
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 216185 ']'
00:11:30.378 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 216185
00:11:30.379 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 216185 ']'
00:11:30.379 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 216185
00:11:30.379 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:30.379 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:30.379 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216185
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216185'
00:11:30.379 killing process with pid 216185
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 216185
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 216185
00:11:30.379 nvmf threads initialize successfully
00:11:30.379 bdev subsystem init successfully
00:11:30.379 created a nvmf target service
00:11:30.379 create targets's poll groups done
00:11:30.379 all subsystems of target started
00:11:30.379 nvmf target is running
00:11:30.379 all subsystems of target stopped
00:11:30.379 destroy targets's poll groups done
00:11:30.379 destroyed the nvmf target service
00:11:30.379 bdev subsystem finish successfully
00:11:30.379 nvmf threads destroy successfully
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:30.379 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:30.640 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:30.640 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:30.640 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:30.640 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:30.900
00:11:30.900 real 0m21.157s
00:11:30.900 user 0m46.348s
00:11:30.900 sys 0m6.995s
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:30.900 ************************************
00:11:30.900 END TEST nvmf_example
00:11:30.900 ************************************
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:30.900 ************************************
00:11:30.900 START TEST nvmf_filesystem
00:11:30.900 ************************************
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:30.900 * Looking for test storage...
00:11:30.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:30.900 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:31.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.164 --rc genhtml_branch_coverage=1
00:11:31.164 --rc genhtml_function_coverage=1
00:11:31.164 --rc genhtml_legend=1
00:11:31.164 --rc geninfo_all_blocks=1
00:11:31.164 --rc geninfo_unexecuted_blocks=1
00:11:31.164
00:11:31.164 '
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:31.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.164 --rc genhtml_branch_coverage=1
00:11:31.164 --rc genhtml_function_coverage=1
00:11:31.164 --rc genhtml_legend=1
00:11:31.164 --rc geninfo_all_blocks=1
00:11:31.164 --rc geninfo_unexecuted_blocks=1
00:11:31.164
00:11:31.164 '
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:31.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.164 --rc genhtml_branch_coverage=1
00:11:31.164 --rc genhtml_function_coverage=1
00:11:31.164 --rc genhtml_legend=1
00:11:31.164 --rc geninfo_all_blocks=1
00:11:31.164 --rc geninfo_unexecuted_blocks=1
00:11:31.164
00:11:31.164 '
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:31.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:31.164 --rc genhtml_branch_coverage=1
00:11:31.164 --rc genhtml_function_coverage=1
00:11:31.164 --rc genhtml_legend=1
00:11:31.164 --rc geninfo_all_blocks=1
00:11:31.164 --rc geninfo_unexecuted_blocks=1
00:11:31.164
00:11:31.164 '
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
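The lt/cmp_versions trace above is a pure-bash, field-wise numeric version compare: split both versions on the separators, then compare component by component, treating missing components as zero (here 1 < 2 decides that lcov 1.15 sorts before 2). The same idea condensed into a sketch; version_lt is a hypothetical helper for illustration, the tree's real implementation lives in scripts/common.sh:

  version_lt() {                       # succeed if version $1 sorts strictly before $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                         # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'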
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:31.164
06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:31.164 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:11:31.165 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:31.165 #define SPDK_CONFIG_H
00:11:31.165 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:31.165 #define SPDK_CONFIG_APPS 1
00:11:31.165 #define SPDK_CONFIG_ARCH native
00:11:31.165 #undef SPDK_CONFIG_ASAN
00:11:31.165 #undef SPDK_CONFIG_AVAHI
00:11:31.165 #undef SPDK_CONFIG_CET
00:11:31.165 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:31.165 #define SPDK_CONFIG_COVERAGE 1
00:11:31.165 #define SPDK_CONFIG_CROSS_PREFIX
00:11:31.165 #undef SPDK_CONFIG_CRYPTO
00:11:31.165 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:31.165 #undef SPDK_CONFIG_CUSTOMOCF
00:11:31.165 #undef SPDK_CONFIG_DAOS
00:11:31.165 #define SPDK_CONFIG_DAOS_DIR
00:11:31.165 #define SPDK_CONFIG_DEBUG 1
00:11:31.165 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:31.165 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:31.165 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:31.165 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:31.165 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:31.165 #undef SPDK_CONFIG_DPDK_UADK
00:11:31.165 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:31.165 #define SPDK_CONFIG_EXAMPLES 1
00:11:31.165 #undef SPDK_CONFIG_FC
00:11:31.165 #define SPDK_CONFIG_FC_PATH
00:11:31.165 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:31.165 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:31.165 #define SPDK_CONFIG_FSDEV 1
00:11:31.165 #undef SPDK_CONFIG_FUSE
00:11:31.165 #undef SPDK_CONFIG_FUZZER
00:11:31.165 #define SPDK_CONFIG_FUZZER_LIB
00:11:31.165 #undef SPDK_CONFIG_GOLANG
00:11:31.165 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:31.165 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:31.165 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:31.165 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:31.165 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:31.165 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:31.165 #undef SPDK_CONFIG_HAVE_LZ4
00:11:31.165 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:31.165 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:31.165 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:31.165 #define SPDK_CONFIG_IDXD 1
00:11:31.165 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:31.165 #undef SPDK_CONFIG_IPSEC_MB
00:11:31.165 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:31.165 #define SPDK_CONFIG_ISAL 1
00:11:31.165 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:31.165 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:31.165 #define SPDK_CONFIG_LIBDIR
00:11:31.165 #undef SPDK_CONFIG_LTO
00:11:31.165 #define SPDK_CONFIG_MAX_LCORES 128
00:11:31.165 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:31.165 #define SPDK_CONFIG_NVME_CUSE 1
00:11:31.165 #undef SPDK_CONFIG_OCF
00:11:31.165 #define SPDK_CONFIG_OCF_PATH
00:11:31.165 #define SPDK_CONFIG_OPENSSL_PATH
00:11:31.165 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:31.165 #define SPDK_CONFIG_PGO_DIR
00:11:31.166 #undef SPDK_CONFIG_PGO_USE
00:11:31.166 #define SPDK_CONFIG_PREFIX /usr/local
00:11:31.166 #undef SPDK_CONFIG_RAID5F
00:11:31.166 #undef SPDK_CONFIG_RBD
00:11:31.166 #define SPDK_CONFIG_RDMA 1
00:11:31.166 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:31.166 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:31.166 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:31.166 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:31.166 #define SPDK_CONFIG_SHARED 1
00:11:31.166 #undef SPDK_CONFIG_SMA
00:11:31.166 #define SPDK_CONFIG_TESTS 1
00:11:31.166 #undef SPDK_CONFIG_TSAN
00:11:31.166 #define SPDK_CONFIG_UBLK 1 00:11:31.166 #define SPDK_CONFIG_UBSAN 1 00:11:31.166 #undef SPDK_CONFIG_UNIT_TESTS 00:11:31.166 #undef SPDK_CONFIG_URING 00:11:31.166 #define SPDK_CONFIG_URING_PATH 00:11:31.166 #undef SPDK_CONFIG_URING_ZNS 00:11:31.166 #undef SPDK_CONFIG_USDT 00:11:31.166 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:31.166 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:31.166 #define SPDK_CONFIG_VFIO_USER 1 00:11:31.166 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:31.166 #define SPDK_CONFIG_VHOST 1 00:11:31.166 #define SPDK_CONFIG_VIRTIO 1 00:11:31.166 #undef SPDK_CONFIG_VTUNE 00:11:31.166 #define SPDK_CONFIG_VTUNE_DIR 00:11:31.166 #define SPDK_CONFIG_WERROR 1 00:11:31.166 #define SPDK_CONFIG_WPDK_DIR 00:11:31.166 #undef SPDK_CONFIG_XNVME 00:11:31.166 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:31.166 06:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
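Each ': 0' / 'export SPDK_TEST_...' pair in this stretch is the usual bash default-and-export idiom, which is consistent with the trace (xtrace prints the ':' no-op with its already-expanded value): the colon forces a ${VAR:=default} assignment, so a value injected by the CI job definition survives while unset flags fall back to their defaults, and the export makes the flag visible to every child test script. In isolation, a sketch with one of the flags from this run:

  : "${SPDK_TEST_NVMF:=0}"     # keep an inherited value, otherwise default to 0
  export SPDK_TEST_NVMF        # propagate to child test scripts
  ((SPDK_TEST_NVMF)) && echo 'NVMe-oF test suite enabled'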
00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:31.166 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:31.167 06:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:31.167 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
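A note on the block above: the harness exports sanitizer runtime options as colon-separated key=value strings before any test binary starts, so every child process inherits them. A minimal sketch of the same pattern, with ./my_test standing in as a hypothetical test binary (not part of this log):

  # abort_on_error=1 turns an ASAN report into an abort, so CI registers a failure
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  # halt_on_error=1 plus exitcode=134 makes UBSAN findings fatal with a distinctive exit code
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  ./my_test   # hypothetical binary; picks the options up from the environment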
00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
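The leak-suppression lines above (asan_suppression_file, echo leak:libfuse3.so, LSAN_OPTIONS) build a LeakSanitizer suppression file on the fly so known leaks in libfuse3 do not fail the run. A sketch of that mechanism reconstructed from the trace; the redirection into the file is an assumption, since xtrace shows commands but not their redirections:

  sup=/var/tmp/asan_suppression_file
  rm -rf "$sup"
  # one rule per line: "leak:<pattern>" silences leak reports whose stack
  # mentions a frame or module matching the pattern
  echo 'leak:libfuse3.so' >> "$sup"
  export LSAN_OPTIONS=suppressions=$sup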
00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j128 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 218561 ]] 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 218561 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:31.168 
06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.P4yvJU 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:31.168 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.P4yvJU/tests/target /tmp/spdk.P4yvJU 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:31.169 06:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123668979712 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129363189760 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5694210048 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64671563776 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64681594880 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25849303040 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25872637952 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23334912 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=335872 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=167936 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:31.169 06:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64681418752 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64681594880 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12936306688 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12936318976 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:31.169 * Looking for test storage... 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123668979712 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7908802560 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.169 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.170 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:31.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.430 --rc genhtml_branch_coverage=1 00:11:31.430 --rc genhtml_function_coverage=1 00:11:31.430 --rc genhtml_legend=1 00:11:31.430 --rc geninfo_all_blocks=1 00:11:31.430 --rc geninfo_unexecuted_blocks=1 00:11:31.430 00:11:31.430 ' 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:31.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.430 --rc genhtml_branch_coverage=1 00:11:31.430 --rc genhtml_function_coverage=1 00:11:31.430 --rc genhtml_legend=1 00:11:31.430 --rc geninfo_all_blocks=1 00:11:31.430 --rc geninfo_unexecuted_blocks=1 00:11:31.430 00:11:31.430 ' 00:11:31.430 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:31.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.430 --rc genhtml_branch_coverage=1 00:11:31.430 --rc genhtml_function_coverage=1 00:11:31.430 --rc genhtml_legend=1 00:11:31.430 --rc geninfo_all_blocks=1 00:11:31.430 --rc geninfo_unexecuted_blocks=1 00:11:31.430 00:11:31.430 ' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:31.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.431 --rc genhtml_branch_coverage=1 00:11:31.431 --rc genhtml_function_coverage=1 00:11:31.431 --rc genhtml_legend=1 00:11:31.431 --rc geninfo_all_blocks=1 00:11:31.431 --rc geninfo_unexecuted_blocks=1 00:11:31.431 00:11:31.431 ' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.431 06:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.431 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:39.576 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:39.576 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.576 06:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:39.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:39.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.576 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.577 06:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:11:39.577 00:11:39.577 --- 10.0.0.2 ping statistics --- 00:11:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.577 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:11:39.577 00:11:39.577 --- 10.0.0.1 ping statistics --- 00:11:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.577 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.577 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 ************************************ 00:11:39.577 START TEST nvmf_filesystem_no_in_capsule 00:11:39.577 ************************************ 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=222231 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 222231 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 222231 ']' 00:11:39.577 06:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 [2024-12-09 06:10:33.144187] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:11:39.577 [2024-12-09 06:10:33.144244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.577 [2024-12-09 06:10:33.239624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.577 [2024-12-09 06:10:33.290580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.577 [2024-12-09 06:10:33.290634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.577 [2024-12-09 06:10:33.290642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.577 [2024-12-09 06:10:33.290648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.577 [2024-12-09 06:10:33.290654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
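
nvmfappstart, restated. The binary path and flags come from the log; the polling loop is only a stand-in for the harness's waitforlisten, which likewise waits until the RPC socket answers.

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until /var/tmp/spdk.sock accepts RPCs
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
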
00:11:39.577 [2024-12-09 06:10:33.292535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.577 [2024-12-09 06:10:33.292726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.577 [2024-12-09 06:10:33.292883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.577 [2024-12-09 06:10:33.292883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.577 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 [2024-12-09 06:10:34.032389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.577 Malloc1 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.577 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.839 06:10:34 
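
The target-side provisioning around this point, restated end to end (rpc_cmd in the log is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock; the namespace and listener calls follow in the next records):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # no in-capsule data in this pass
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
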
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.839 [2024-12-09 06:10:34.188481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:39.839 { 00:11:39.839 "name": "Malloc1", 00:11:39.839 "aliases": [ 00:11:39.839 "32e2fc4e-f8ee-4d55-b89d-8a4e03baf054" 00:11:39.839 ], 00:11:39.839 "product_name": "Malloc disk", 00:11:39.839 "block_size": 512, 00:11:39.839 "num_blocks": 1048576, 00:11:39.839 "uuid": "32e2fc4e-f8ee-4d55-b89d-8a4e03baf054", 00:11:39.839 "assigned_rate_limits": { 00:11:39.839 "rw_ios_per_sec": 0, 00:11:39.839 "rw_mbytes_per_sec": 0, 00:11:39.839 "r_mbytes_per_sec": 0, 00:11:39.839 "w_mbytes_per_sec": 0 00:11:39.839 }, 00:11:39.839 "claimed": true, 00:11:39.839 "claim_type": "exclusive_write", 00:11:39.839 "zoned": false, 00:11:39.839 "supported_io_types": { 00:11:39.839 "read": 
true, 00:11:39.839 "write": true, 00:11:39.839 "unmap": true, 00:11:39.839 "flush": true, 00:11:39.839 "reset": true, 00:11:39.839 "nvme_admin": false, 00:11:39.839 "nvme_io": false, 00:11:39.839 "nvme_io_md": false, 00:11:39.839 "write_zeroes": true, 00:11:39.839 "zcopy": true, 00:11:39.839 "get_zone_info": false, 00:11:39.839 "zone_management": false, 00:11:39.839 "zone_append": false, 00:11:39.839 "compare": false, 00:11:39.839 "compare_and_write": false, 00:11:39.839 "abort": true, 00:11:39.839 "seek_hole": false, 00:11:39.839 "seek_data": false, 00:11:39.839 "copy": true, 00:11:39.839 "nvme_iov_md": false 00:11:39.839 }, 00:11:39.839 "memory_domains": [ 00:11:39.839 { 00:11:39.839 "dma_device_id": "system", 00:11:39.839 "dma_device_type": 1 00:11:39.839 }, 00:11:39.839 { 00:11:39.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.839 "dma_device_type": 2 00:11:39.839 } 00:11:39.839 ], 00:11:39.839 "driver_specific": {} 00:11:39.839 } 00:11:39.839 ]' 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:39.839 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.761 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.761 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.761 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.761 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.761 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:43.669 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:43.929 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:44.497 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.880 ************************************ 00:11:45.880 START TEST filesystem_ext4 00:11:45.880 ************************************ 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
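
Host-side sketch of the connect-and-partition sequence just logged (all values verbatim from the xtrace; SPDKISFASTANDAWESOME is the serial set on the subsystem above):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
      --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  # locate the namespace by serial, then carve a single GPT partition
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
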
00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:45.880 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:45.880 mke2fs 1.47.0 (5-Feb-2023) 00:11:45.880 Discarding device blocks: 0/522240 done 00:11:45.880 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:45.880 Filesystem UUID: 99dffd28-7eb9-4663-8bdc-803ecdf7e493 00:11:45.880 Superblock backups stored on blocks: 00:11:45.880 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:45.880 00:11:45.880 Allocating group tables: 0/64 done 00:11:45.880 Writing inode tables: 0/64 done 00:11:48.425 Creating journal (8192 blocks): done 00:11:50.751 Writing superblocks and filesystem accounting information: 0/64 done 00:11:50.751 00:11:50.751 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:50.751 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.039 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.300
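
The per-filesystem check that just ran for ext4, condensed (the same steps repeat below for btrfs and xfs; the write/sync/remove round-trip is what proves I/O reaches the remote bdev):

  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync        # write must land on the target
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                   # target process must still be alive afterwards
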
06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 222231 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.300 00:11:56.300 real 0m10.653s 00:11:56.300 user 0m0.030s 00:11:56.300 sys 0m0.127s 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:56.300 ************************************ 00:11:56.300 END TEST filesystem_ext4 00:11:56.300 ************************************ 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.300 ************************************ 00:11:56.300 START TEST filesystem_btrfs 00:11:56.300 ************************************ 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:56.300 06:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:56.300 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:56.561 btrfs-progs v6.8.1 00:11:56.561 See https://btrfs.readthedocs.io for more information. 00:11:56.561 00:11:56.561 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:56.561 NOTE: several default settings have changed in version 5.15, please make sure 00:11:56.561 this does not affect your deployments: 00:11:56.561 - DUP for metadata (-m dup) 00:11:56.561 - enabled no-holes (-O no-holes) 00:11:56.561 - enabled free-space-tree (-R free-space-tree) 00:11:56.561 00:11:56.561 Label: (null) 00:11:56.561 UUID: cd241403-0faa-4631-a81d-e19a0166b34a 00:11:56.561 Node size: 16384 00:11:56.561 Sector size: 4096 (CPU page size: 4096) 00:11:56.561 Filesystem size: 510.00MiB 00:11:56.561 Block group profiles: 00:11:56.561 Data: single 8.00MiB 00:11:56.561 Metadata: DUP 32.00MiB 00:11:56.561 System: DUP 8.00MiB 00:11:56.561 SSD detected: yes 00:11:56.561 Zoned device: no 00:11:56.561 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:56.561 Checksum: crc32c 00:11:56.561 Number of devices: 1 00:11:56.561 Devices: 00:11:56.561 ID SIZE PATH 00:11:56.561 1 510.00MiB /dev/nvme0n1p1 00:11:56.561 00:11:56.561 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:56.561 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:57.946 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 222231 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.947 
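
make_filesystem's force-flag selection, visible in the xtrace above (ext4's mkfs wants -F, btrfs and xfs want -f); a minimal restatement with the helper's retry loop omitted:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      "mkfs.$fstype" $force "$dev_name"
  }
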
06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.947 00:11:57.947 real 0m1.565s 00:11:57.947 user 0m0.033s 00:11:57.947 sys 0m0.166s 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.947 ************************************ 00:11:57.947 END TEST filesystem_btrfs 00:11:57.947 ************************************ 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.947 ************************************ 00:11:57.947 START TEST filesystem_xfs 00:11:57.947 ************************************ 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:57.947 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.208 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.208 = sectsz=512 attr=2, projid32bit=1 00:11:58.208 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.208 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.208 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:58.208 = sunit=0 swidth=0 blks 00:11:58.208 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.208 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.208 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.208 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.152 Discarding blocks...Done. 00:11:59.152 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:59.152 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.451 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 222231 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.711 00:12:02.711 real 0m4.694s 00:12:02.711 user 0m0.026s 00:12:02.711 sys 0m0.129s 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.711 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:02.711 ************************************ 00:12:02.711 END TEST filesystem_xfs 00:12:02.711 ************************************ 00:12:02.712 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.972 06:10:57 
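
Teardown sketch matching the records around here (the flock presumably serializes parted against concurrent device probes; the subsystem deletion and target kill follow in the next records):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"
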
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 222231 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 222231 ']' 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 222231 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222231 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.972 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222231' 00:12:02.973 killing process with pid 222231 00:12:02.973 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 222231 00:12:02.973 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 222231 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:03.233 00:12:03.233 real 0m24.603s 00:12:03.233 user 1m37.350s 00:12:03.233 sys 0m1.717s 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.233 ************************************ 00:12:03.233 END TEST nvmf_filesystem_no_in_capsule 00:12:03.233 ************************************ 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:03.233 ************************************ 00:12:03.233 START TEST nvmf_filesystem_in_capsule 00:12:03.233 ************************************ 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=226507 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 226507 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 226507 ']' 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
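
The second pass (nvmf_filesystem_in_capsule) repeats the whole flow; the one material difference is the transport's in-capsule data size, so writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of requiring a separate data transfer. Target-side that is a single changed flag, as the rpc_cmd below shows:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # was -c 0 in the first pass
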
00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.233 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.494 [2024-12-09 06:10:57.822433] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:12:03.494 [2024-12-09 06:10:57.822486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.494 [2024-12-09 06:10:57.912534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.494 [2024-12-09 06:10:57.944034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.494 [2024-12-09 06:10:57.944064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.494 [2024-12-09 06:10:57.944070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.494 [2024-12-09 06:10:57.944075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.494 [2024-12-09 06:10:57.944079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.494 [2024-12-09 06:10:57.945513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.494 [2024-12-09 06:10:57.945830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.494 [2024-12-09 06:10:57.945949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.494 [2024-12-09 06:10:57.945950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.064 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.064 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:04.064 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.064 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.064 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 [2024-12-09 06:10:58.678843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.324 06:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 [2024-12-09 06:10:58.817287] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:04.324 06:10:58 
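
get_bdev_size, whose locals were just declared: the bdev_get_bdevs dump that follows feeds two jq extractions, and the size comes out in MiB (arithmetic inferred from the echoed values: 512 B blocks * 1048576 blocks = 536870912 B = 512 MiB, matching malloc_size below):

  info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$info")     # 512
  nb=$(jq '.[] .num_blocks' <<< "$info")     # 1048576
  echo $(( bs * nb / 1024 / 1024 ))          # 512
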
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:04.324 { 00:12:04.324 "name": "Malloc1", 00:12:04.324 "aliases": [ 00:12:04.324 "e31fd9b8-636a-4b57-9104-41df661e65da" 00:12:04.324 ], 00:12:04.324 "product_name": "Malloc disk", 00:12:04.324 "block_size": 512, 00:12:04.324 "num_blocks": 1048576, 00:12:04.324 "uuid": "e31fd9b8-636a-4b57-9104-41df661e65da", 00:12:04.324 "assigned_rate_limits": { 00:12:04.324 "rw_ios_per_sec": 0, 00:12:04.324 "rw_mbytes_per_sec": 0, 00:12:04.324 "r_mbytes_per_sec": 0, 00:12:04.324 "w_mbytes_per_sec": 0 00:12:04.324 }, 00:12:04.324 "claimed": true, 00:12:04.324 "claim_type": "exclusive_write", 00:12:04.324 "zoned": false, 00:12:04.324 "supported_io_types": { 00:12:04.324 "read": true, 00:12:04.324 "write": true, 00:12:04.324 "unmap": true, 00:12:04.324 "flush": true, 00:12:04.324 "reset": true, 00:12:04.324 "nvme_admin": false, 00:12:04.324 "nvme_io": false, 00:12:04.324 "nvme_io_md": false, 00:12:04.324 "write_zeroes": true, 00:12:04.324 "zcopy": true, 00:12:04.324 "get_zone_info": false, 00:12:04.324 "zone_management": false, 00:12:04.324 "zone_append": false, 00:12:04.324 "compare": false, 00:12:04.324 "compare_and_write": false, 00:12:04.324 "abort": true, 00:12:04.324 "seek_hole": false, 00:12:04.324 "seek_data": false, 00:12:04.324 "copy": true, 00:12:04.324 "nvme_iov_md": false 00:12:04.324 }, 00:12:04.324 "memory_domains": [ 00:12:04.324 { 00:12:04.324 "dma_device_id": "system", 00:12:04.324 "dma_device_type": 1 00:12:04.324 }, 00:12:04.324 { 00:12:04.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.324 "dma_device_type": 2 00:12:04.324 } 00:12:04.324 ], 00:12:04.324 "driver_specific": {} 00:12:04.324 } 00:12:04.324 ]' 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:04.324 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:04.600 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:04.600 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:04.600 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:04.600 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:04.600 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.981 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.981 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:05.981 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.981 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:05.981 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:07.894 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:08.471 06:11:02 
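
waitforserial, seen after both connects: poll lsblk until exactly one namespace reports the subsystem serial (the harness allows up to 15 tries, two seconds apart); restated:

  i=0
  while (( i++ <= 15 )); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( nvme_devices == 1 )) && break
      sleep 2
  done
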
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:08.471 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.412 ************************************ 00:12:09.412 START TEST filesystem_in_capsule_ext4 00:12:09.412 ************************************ 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:09.412 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:09.413 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:09.413 mke2fs 1.47.0 (5-Feb-2023) 00:12:09.413 Discarding device blocks: 0/522240 done 00:12:09.413 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:09.413 Filesystem UUID: e264f0a1-7430-44d9-b61f-049fa1ce97fd 00:12:09.413 Superblock backups stored on blocks: 00:12:09.413 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:09.413 00:12:09.413 Allocating group tables: 0/64 done 00:12:09.674 Writing inode tables: 
0/64 done 00:12:09.674 Creating journal (8192 blocks): done 00:12:10.196 Writing superblocks and filesystem accounting information: 0/64 done 00:12:10.196 00:12:10.196 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:10.196 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 226507 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.780 00:12:16.780 real 0m6.358s 00:12:16.780 user 0m0.022s 00:12:16.780 sys 0m0.083s 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:16.780 ************************************ 00:12:16.780 END TEST filesystem_in_capsule_ext4 00:12:16.780 ************************************ 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.780 
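Each filesystem variant then runs the same smoke test visible in the ext4 trace above (target/filesystem.sh lines 23-43): mount the new partition, perform one small write and one delete with syncs in between, unmount, and verify the target process survived the I/O. Condensed into one hypothetical function:

    fs_smoke_test() {
        local part=$1 mnt=$2 tgt_pid=$3 dev
        dev=$(basename "$part")                 # e.g. nvme0n1p1
        mount "$part" "$mnt"
        touch "$mnt/aaa"
        sync
        rm "$mnt/aaa"
        sync
        umount "$mnt"
        kill -0 "$tgt_pid"                      # nvmf target must still be alive
        lsblk -l -o NAME | grep -q -w "${dev%p*}"   # whole device still present
        lsblk -l -o NAME | grep -q -w "$dev"        # partition still present
    }

    fs_smoke_test /dev/nvme0n1p1 /mnt/device 226507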
************************************ 00:12:16.780 START TEST filesystem_in_capsule_btrfs 00:12:16.780 ************************************ 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:16.780 btrfs-progs v6.8.1 00:12:16.780 See https://btrfs.readthedocs.io for more information. 00:12:16.780 00:12:16.780 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:16.780 NOTE: several default settings have changed in version 5.15, please make sure 00:12:16.780 this does not affect your deployments: 00:12:16.780 - DUP for metadata (-m dup) 00:12:16.780 - enabled no-holes (-O no-holes) 00:12:16.780 - enabled free-space-tree (-R free-space-tree) 00:12:16.780 00:12:16.780 Label: (null) 00:12:16.780 UUID: 6dd8accc-600f-400b-853f-8f6879886677 00:12:16.780 Node size: 16384 00:12:16.780 Sector size: 4096 (CPU page size: 4096) 00:12:16.780 Filesystem size: 510.00MiB 00:12:16.780 Block group profiles: 00:12:16.780 Data: single 8.00MiB 00:12:16.780 Metadata: DUP 32.00MiB 00:12:16.780 System: DUP 8.00MiB 00:12:16.780 SSD detected: yes 00:12:16.780 Zoned device: no 00:12:16.780 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:16.780 Checksum: crc32c 00:12:16.780 Number of devices: 1 00:12:16.780 Devices: 00:12:16.780 ID SIZE PATH 00:12:16.780 1 510.00MiB /dev/nvme0n1p1 00:12:16.780 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:16.780 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 226507 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.780 00:12:16.780 real 0m0.696s 00:12:16.780 user 0m0.033s 00:12:16.780 sys 0m0.116s 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:16.780 ************************************ 00:12:16.780 END TEST filesystem_in_capsule_btrfs 00:12:16.780 ************************************ 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.780 ************************************ 00:12:16.780 START TEST filesystem_in_capsule_xfs 00:12:16.780 ************************************ 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.780 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:16.780 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:16.780 = sectsz=512 attr=2, projid32bit=1 00:12:16.780 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:16.780 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:16.780 data = bsize=4096 blocks=130560, imaxpct=25 00:12:16.780 = sunit=0 swidth=0 blks 00:12:16.780 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:16.780 log =internal log bsize=4096 blocks=16384, version=2 00:12:16.780 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:16.780 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:17.719 Discarding blocks...Done. 
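All three runs funnel through make_filesystem (common/autotest_common.sh, around lines 930-949). The only per-filesystem branch visible in the traces is the force flag: mkfs.ext4 spells it -F, while mkfs.btrfs and mkfs.xfs take -f. A simplified sketch; the retry counter the real helper keeps in $i is omitted here:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F                # ext4's capital force flag
        else
            force=-f                # btrfs and xfs agree on lowercase
        fi
        "mkfs.$fstype" $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1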
00:12:17.719 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.719 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.263 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.263 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:20.263 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.263 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 226507 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.524 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.525 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.525 00:12:20.525 real 0m3.733s 00:12:20.525 user 0m0.032s 00:12:20.525 sys 0m0.074s 00:12:20.525 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.525 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:20.525 ************************************ 00:12:20.525 END TEST filesystem_in_capsule_xfs 00:12:20.525 ************************************ 00:12:20.525 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.784 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 226507 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 226507 ']' 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 226507 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226507 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.044 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226507' 00:12:21.044 killing process with pid 226507 00:12:21.045 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 226507 00:12:21.045 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 226507 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:21.306 00:12:21.306 real 0m17.892s 00:12:21.306 user 1m10.762s 00:12:21.306 sys 0m1.417s 00:12:21.306 06:11:15 
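killprocess (common/autotest_common.sh lines 954-978) tears the target down defensively: confirm a pid was passed and is still alive, look up its command name, refuse to signal anything that resolves to sudo, then kill and reap. Roughly:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1              # is it still running?
        if [ "$(uname)" = Linux ]; then
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reaping works because the target is a child of this shell
    }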
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.306 ************************************ 00:12:21.306 END TEST nvmf_filesystem_in_capsule 00:12:21.306 ************************************ 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.306 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.307 rmmod nvme_tcp 00:12:21.307 rmmod nvme_fabrics 00:12:21.307 rmmod nvme_keyring 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.307 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.851 00:12:23.851 real 0m52.520s 00:12:23.851 user 2m50.411s 00:12:23.851 sys 0m8.817s 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.851 
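nvmftestfini then unwinds what nvmftestinit set up: sync, unload the kernel initiator modules (retried up to 20 times in the real helper), strip only the iptables rules the harness tagged, and remove the target's network namespace. An approximate consolidation of the traced steps; the namespace-delete command is an assumption about what _remove_spdk_ns does:

    nvmf_teardown() {
        sync
        modprobe -v -r nvme-tcp || true
        modprobe -v -r nvme-fabrics || true
        # Every rule the harness adds carries an SPDK_NVMF comment, so a
        # save/filter/restore round trip removes exactly those rules.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }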
************************************ 00:12:23.851 END TEST nvmf_filesystem 00:12:23.851 ************************************ 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.851 ************************************ 00:12:23.851 START TEST nvmf_target_discovery 00:12:23.851 ************************************ 00:12:23.851 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:23.851 * Looking for test storage... 00:12:23.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.851 --rc genhtml_branch_coverage=1 00:12:23.851 --rc genhtml_function_coverage=1 00:12:23.851 --rc genhtml_legend=1 00:12:23.851 --rc geninfo_all_blocks=1 00:12:23.851 --rc geninfo_unexecuted_blocks=1 00:12:23.851 00:12:23.851 ' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.851 --rc genhtml_branch_coverage=1 00:12:23.851 --rc genhtml_function_coverage=1 00:12:23.851 --rc genhtml_legend=1 00:12:23.851 --rc geninfo_all_blocks=1 00:12:23.851 --rc geninfo_unexecuted_blocks=1 00:12:23.851 00:12:23.851 ' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.851 --rc genhtml_branch_coverage=1 00:12:23.851 --rc genhtml_function_coverage=1 00:12:23.851 --rc genhtml_legend=1 00:12:23.851 --rc geninfo_all_blocks=1 00:12:23.851 --rc geninfo_unexecuted_blocks=1 00:12:23.851 00:12:23.851 ' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.851 --rc genhtml_branch_coverage=1 00:12:23.851 --rc genhtml_function_coverage=1 00:12:23.851 --rc genhtml_legend=1 00:12:23.851 --rc geninfo_all_blocks=1 00:12:23.851 --rc geninfo_unexecuted_blocks=1 00:12:23.851 00:12:23.851 ' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.851 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.852 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.990 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.991 06:11:25 
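gather_supported_nvmf_pci_devs matches NICs by vendor:device ID; 0x8086:0x159b is the Intel E810 pair this rig exposes at 0000:4b:00.0/1. The real helper walks a prebuilt pci_bus_cache map, but the same selection can be sketched with lspci (a hypothetical equivalent, not the harness code):

    intel=0x8086
    e810_ids=(1592 159b)            # the two E810 device IDs checked in the trace
    pci_devs=()
    for id in "${e810_ids[@]}"; do
        while read -r addr; do
            pci_devs+=("$addr")
            echo "Found $addr ($intel - 0x$id)"
        done < <(lspci -D -d "${intel#0x}:$id" | awk '{print $1}')
    done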
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:31.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
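Each matched PCI function is mapped to its kernel interface through sysfs: /sys/bus/pci/devices/<addr>/net/ holds one entry per netdev the function exposes, and the [[ up == up ]] checks in the trace suggest a gate on the link's operstate. The lookup that produced "Found net devices under 0000:4b:00.0: cvl_0_0", reduced to its essentials:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path, keep names
    [[ $(cat "/sys/class/net/${pci_net_devs[0]}/operstate") == up ]] \
        && echo "Found net devices under $pci: ${pci_net_devs[*]}"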
00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.991 06:11:25 
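nvmf_tcp_init splits the NIC's two ports into separate network stacks: cvl_0_0 moves into a private namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so the two endpoints genuinely talk over the link rather than loopback. The commands from this trace and the next, collected in order:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up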
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:12:31.991 00:12:31.991 --- 10.0.0.2 ping statistics --- 00:12:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.991 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:12:31.991 00:12:31.991 --- 10.0.0.1 ping statistics --- 00:12:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.991 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:31.991 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=233681 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 233681 00:12:31.992 06:11:25 
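The NVMe/TCP port is opened through an iptables rule tagged with an SPDK_NVMF comment; that tag is what lets the teardown's save/filter/restore pass remove exactly this rule later. Both directions are then pinged before the target is started:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator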
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 233681 ']' 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.992 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.992 [2024-12-09 06:11:25.810137] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:12:31.992 [2024-12-09 06:11:25.810200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.992 [2024-12-09 06:11:25.905597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.992 [2024-12-09 06:11:25.957403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.992 [2024-12-09 06:11:25.957467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.992 [2024-12-09 06:11:25.957476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.992 [2024-12-09 06:11:25.957483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.992 [2024-12-09 06:11:25.957489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
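waitforlisten (common/autotest_common.sh, around lines 835-868) gates the rest of the test on the freshly launched nvmf_tgt being both alive and answering RPCs on /var/tmp/spdk.sock. A simplified sketch; probing with rpc.py rpc_get_methods is an assumption about the readiness check, not the helper's exact mechanics:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1      # app died during init
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    waitforlisten 233681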
00:12:31.992 [2024-12-09 06:11:25.959434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.992 [2024-12-09 06:11:25.959598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.992 [2024-12-09 06:11:25.959840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.992 [2024-12-09 06:11:25.959843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 [2024-12-09 06:11:26.679724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 Null1 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 
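The `for i in $(seq 1 4)` loop that begins here repeats the same RPC sequence for Null1 through Null4: create a null bdev, create a subsystem, attach the bdev as a namespace, then add a TCP listener. Condensed as a sketch, with rpc.py standing in for the harness's rpc_cmd wrapper (an assumption; the commands and arguments themselves are verbatim from this log):

    # One pass per subsystem: 100 MiB null bdev, 512-byte blocks.
    for i in 1 2 3 4; do
        rpc.py bdev_null_create "Null$i" 102400 512
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done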
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 [2024-12-09 06:11:26.744571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 Null2 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:32.252 Null3 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 Null4 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.252 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.512 06:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.512 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:12:32.512 00:12:32.512 Discovery Log Number of Records 6, Generation counter 6 00:12:32.512 =====Discovery Log Entry 0====== 00:12:32.512 trtype: tcp 00:12:32.512 adrfam: ipv4 00:12:32.512 subtype: current discovery subsystem 00:12:32.512 treq: not required 00:12:32.512 portid: 0 00:12:32.512 trsvcid: 4420 00:12:32.512 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:32.512 traddr: 10.0.0.2 00:12:32.512 eflags: explicit discovery connections, duplicate discovery information 00:12:32.512 sectype: none 00:12:32.512 =====Discovery Log Entry 1====== 00:12:32.512 trtype: tcp 00:12:32.512 adrfam: ipv4 00:12:32.512 subtype: nvme subsystem 00:12:32.512 treq: not required 00:12:32.512 portid: 0 00:12:32.512 trsvcid: 4420 00:12:32.512 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:32.512 traddr: 10.0.0.2 00:12:32.512 eflags: none 00:12:32.512 sectype: none 00:12:32.512 =====Discovery Log Entry 2====== 00:12:32.512 trtype: tcp 00:12:32.512 adrfam: ipv4 00:12:32.512 subtype: nvme subsystem 00:12:32.512 treq: not required 00:12:32.512 portid: 0 00:12:32.512 trsvcid: 4420 00:12:32.512 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:32.512 traddr: 10.0.0.2 00:12:32.513 eflags: none 00:12:32.513 sectype: none 00:12:32.513 =====Discovery Log Entry 3====== 00:12:32.513 trtype: tcp 00:12:32.513 adrfam: ipv4 00:12:32.513 subtype: nvme subsystem 00:12:32.513 treq: not required 00:12:32.513 portid: 0 00:12:32.513 trsvcid: 4420 00:12:32.513 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:32.513 traddr: 10.0.0.2 00:12:32.513 eflags: none 00:12:32.513 sectype: none 00:12:32.513 =====Discovery Log Entry 4====== 00:12:32.513 trtype: tcp 00:12:32.513 adrfam: ipv4 00:12:32.513 subtype: nvme subsystem 
00:12:32.513 treq: not required 00:12:32.513 portid: 0 00:12:32.513 trsvcid: 4420 00:12:32.513 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:32.513 traddr: 10.0.0.2 00:12:32.513 eflags: none 00:12:32.513 sectype: none 00:12:32.513 =====Discovery Log Entry 5====== 00:12:32.513 trtype: tcp 00:12:32.513 adrfam: ipv4 00:12:32.513 subtype: discovery subsystem referral 00:12:32.513 treq: not required 00:12:32.513 portid: 0 00:12:32.513 trsvcid: 4430 00:12:32.513 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:32.513 traddr: 10.0.0.2 00:12:32.513 eflags: none 00:12:32.513 sectype: none 00:12:32.513 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:32.513 Perform nvmf subsystem discovery via RPC 00:12:32.513 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:32.513 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.513 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.773 [ 00:12:32.773 { 00:12:32.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:32.773 "subtype": "Discovery", 00:12:32.773 "listen_addresses": [ 00:12:32.773 { 00:12:32.773 "trtype": "TCP", 00:12:32.773 "adrfam": "IPv4", 00:12:32.773 "traddr": "10.0.0.2", 00:12:32.773 "trsvcid": "4420" 00:12:32.773 } 00:12:32.773 ], 00:12:32.773 "allow_any_host": true, 00:12:32.773 "hosts": [] 00:12:32.773 }, 00:12:32.773 { 00:12:32.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.773 "subtype": "NVMe", 00:12:32.773 "listen_addresses": [ 00:12:32.773 { 00:12:32.773 "trtype": "TCP", 00:12:32.773 "adrfam": "IPv4", 00:12:32.773 "traddr": "10.0.0.2", 00:12:32.773 "trsvcid": "4420" 00:12:32.773 } 00:12:32.773 ], 00:12:32.773 "allow_any_host": true, 00:12:32.773 "hosts": [], 00:12:32.773 "serial_number": "SPDK00000000000001", 00:12:32.773 "model_number": "SPDK bdev Controller", 00:12:32.773 "max_namespaces": 32, 00:12:32.773 "min_cntlid": 1, 00:12:32.773 "max_cntlid": 65519, 00:12:32.773 "namespaces": [ 00:12:32.773 { 00:12:32.773 "nsid": 1, 00:12:32.773 "bdev_name": "Null1", 00:12:32.773 "name": "Null1", 00:12:32.773 "nguid": "2DD53D3286F34F589D7169777DF12116", 00:12:32.773 "uuid": "2dd53d32-86f3-4f58-9d71-69777df12116" 00:12:32.773 } 00:12:32.773 ] 00:12:32.773 }, 00:12:32.773 { 00:12:32.773 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:32.773 "subtype": "NVMe", 00:12:32.773 "listen_addresses": [ 00:12:32.773 { 00:12:32.773 "trtype": "TCP", 00:12:32.773 "adrfam": "IPv4", 00:12:32.773 "traddr": "10.0.0.2", 00:12:32.773 "trsvcid": "4420" 00:12:32.773 } 00:12:32.773 ], 00:12:32.773 "allow_any_host": true, 00:12:32.773 "hosts": [], 00:12:32.774 "serial_number": "SPDK00000000000002", 00:12:32.774 "model_number": "SPDK bdev Controller", 00:12:32.774 "max_namespaces": 32, 00:12:32.774 "min_cntlid": 1, 00:12:32.774 "max_cntlid": 65519, 00:12:32.774 "namespaces": [ 00:12:32.774 { 00:12:32.774 "nsid": 1, 00:12:32.774 "bdev_name": "Null2", 00:12:32.774 "name": "Null2", 00:12:32.774 "nguid": "AE7548EF8A62458191BCD0F8FB384DDA", 00:12:32.774 "uuid": "ae7548ef-8a62-4581-91bc-d0f8fb384dda" 00:12:32.774 } 00:12:32.774 ] 00:12:32.774 }, 00:12:32.774 { 00:12:32.774 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:32.774 "subtype": "NVMe", 00:12:32.774 "listen_addresses": [ 00:12:32.774 { 00:12:32.774 "trtype": "TCP", 00:12:32.774 "adrfam": "IPv4", 00:12:32.774 "traddr": "10.0.0.2", 
00:12:32.774 "trsvcid": "4420" 00:12:32.774 } 00:12:32.774 ], 00:12:32.774 "allow_any_host": true, 00:12:32.774 "hosts": [], 00:12:32.774 "serial_number": "SPDK00000000000003", 00:12:32.774 "model_number": "SPDK bdev Controller", 00:12:32.774 "max_namespaces": 32, 00:12:32.774 "min_cntlid": 1, 00:12:32.774 "max_cntlid": 65519, 00:12:32.774 "namespaces": [ 00:12:32.774 { 00:12:32.774 "nsid": 1, 00:12:32.774 "bdev_name": "Null3", 00:12:32.774 "name": "Null3", 00:12:32.774 "nguid": "885D6428C10A4C8C9D0582D5C80C135A", 00:12:32.774 "uuid": "885d6428-c10a-4c8c-9d05-82d5c80c135a" 00:12:32.774 } 00:12:32.774 ] 00:12:32.774 }, 00:12:32.774 { 00:12:32.774 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:32.774 "subtype": "NVMe", 00:12:32.774 "listen_addresses": [ 00:12:32.774 { 00:12:32.774 "trtype": "TCP", 00:12:32.774 "adrfam": "IPv4", 00:12:32.774 "traddr": "10.0.0.2", 00:12:32.774 "trsvcid": "4420" 00:12:32.774 } 00:12:32.774 ], 00:12:32.774 "allow_any_host": true, 00:12:32.774 "hosts": [], 00:12:32.774 "serial_number": "SPDK00000000000004", 00:12:32.774 "model_number": "SPDK bdev Controller", 00:12:32.774 "max_namespaces": 32, 00:12:32.774 "min_cntlid": 1, 00:12:32.774 "max_cntlid": 65519, 00:12:32.774 "namespaces": [ 00:12:32.774 { 00:12:32.774 "nsid": 1, 00:12:32.774 "bdev_name": "Null4", 00:12:32.774 "name": "Null4", 00:12:32.774 "nguid": "5F189FE432E2449DB89400B444460B15", 00:12:32.774 "uuid": "5f189fe4-32e2-449d-b894-00b444460b15" 00:12:32.774 } 00:12:32.774 ] 00:12:32.774 } 00:12:32.774 ] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:32.774 06:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.774 rmmod nvme_tcp 00:12:32.774 rmmod nvme_fabrics 00:12:32.774 rmmod nvme_keyring 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 233681 ']' 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 233681 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 233681 ']' 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 233681 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.774 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233681 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233681' 00:12:33.035 killing process with pid 233681 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 233681 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 233681 00:12:33.035 06:11:27 
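Teardown above mirrors setup: each subsystem is deleted before its backing null bdev, the referral on port 4430 is removed, and bdev_get_bdevs piped through jq must come back empty before the target process is killed. As a sketch (rpc.py again stands in for rpc_cmd):

    for i in 1 2 3 4; do
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc.py bdev_null_delete "Null$i"
    done
    rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # The test only proceeds to shutdown once no bdevs remain.
    left=$(rpc.py bdev_get_bdevs | jq -r '.[].name')
    [ -z "$left" ] || echo "unexpected bdevs remain: $left"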
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.035 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.578 00:12:35.578 real 0m11.665s 00:12:35.578 user 0m8.563s 00:12:35.578 sys 0m6.184s 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.578 ************************************ 00:12:35.578 END TEST nvmf_target_discovery 00:12:35.578 ************************************ 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.578 ************************************ 00:12:35.578 START TEST nvmf_referrals 00:12:35.578 ************************************ 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.578 * Looking for test storage... 
00:12:35.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.578 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.579 --rc genhtml_branch_coverage=1 00:12:35.579 --rc genhtml_function_coverage=1 00:12:35.579 --rc genhtml_legend=1 00:12:35.579 --rc geninfo_all_blocks=1 00:12:35.579 --rc geninfo_unexecuted_blocks=1 00:12:35.579 00:12:35.579 ' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.579 --rc genhtml_branch_coverage=1 00:12:35.579 --rc genhtml_function_coverage=1 00:12:35.579 --rc genhtml_legend=1 00:12:35.579 --rc geninfo_all_blocks=1 00:12:35.579 --rc geninfo_unexecuted_blocks=1 00:12:35.579 00:12:35.579 ' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.579 --rc genhtml_branch_coverage=1 00:12:35.579 --rc genhtml_function_coverage=1 00:12:35.579 --rc genhtml_legend=1 00:12:35.579 --rc geninfo_all_blocks=1 00:12:35.579 --rc geninfo_unexecuted_blocks=1 00:12:35.579 00:12:35.579 ' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.579 --rc genhtml_branch_coverage=1 00:12:35.579 --rc genhtml_function_coverage=1 00:12:35.579 --rc genhtml_legend=1 00:12:35.579 --rc geninfo_all_blocks=1 00:12:35.579 --rc geninfo_unexecuted_blocks=1 00:12:35.579 00:12:35.579 ' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
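Above, common.sh derives NVME_HOSTID from the NVME_HOSTNQN that `nvme gen-hostnqn` prints, and both values feed the `--hostnqn`/`--hostid` arguments used by later `nvme discover` calls. A sketch of that derivation; the exact parameter expansion is an assumption inferred from the matching UUID values in this log:

    HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    HOSTID=${HOSTNQN##*:}           # keep only the trailing UUID
    NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")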
00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.579 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.580 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.722 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:43.723 06:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:43.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:43.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:43.723 
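The "Found net devices under ..." lines come from mapping each detected NIC PCI function to its kernel interface name through sysfs, as the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion in the trace shows. The same walk, standalone:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # Each entry under .../net/ is a kernel netdev bound to that function.
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
        done
    done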
06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:43.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:43.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.723 06:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms
00:12:43.723
00:12:43.723 --- 10.0.0.2 ping statistics ---
00:12:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:43.723 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:43.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:43.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:12:43.723
00:12:43.723 --- 10.0.0.1 ping statistics ---
00:12:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:43.723 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:43.723 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=238038
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 238038
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 238038 ']'
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
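The two pings above are the tail end of nvmf_tcp_init: everything since the device scan has been wiring the box's two e810 ports into a self-contained target/initiator pair, with the target port hidden in its own network namespace so the kernel initiator cannot short-circuit over the loopback device. Condensed, and keeping the interface names and addresses from this run, the setup is roughly:

  # isolate the target port in its own namespace, then address both ends
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, then prove reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The single-packet pings only validate the namespace plumbing before any NVMe traffic flows; sub-millisecond round-trip times are expected on this rig.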
00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.724 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.724 [2024-12-09 06:11:37.503372] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:12:43.724 [2024-12-09 06:11:37.503442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.724 [2024-12-09 06:11:37.600389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.724 [2024-12-09 06:11:37.651516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.724 [2024-12-09 06:11:37.651575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.724 [2024-12-09 06:11:37.651583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.724 [2024-12-09 06:11:37.651590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.724 [2024-12-09 06:11:37.651596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.724 [2024-12-09 06:11:37.653691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.724 [2024-12-09 06:11:37.653846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.724 [2024-12-09 06:11:37.654006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.724 [2024-12-09 06:11:37.654006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 [2024-12-09 06:11:38.389598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
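With the data path verified, nvmfappstart launches the target inside the namespace and waits on its RPC socket; from here on the test drives it through rpc_cmd, the suite's wrapper that forwards to SPDK's scripts/rpc.py on /var/tmp/spdk.sock. Reduced to plain commands (with $SPDK standing in for the build tree path used above), the startup and the two RPCs traced here come down to approximately:

  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once waitforlisten sees the socket answer:
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

The tcp.c listen NOTICE on the next line is the target acknowledging that discovery listener on port 8009, which is where the referral entries added next will be served from.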
00:12:43.985 [2024-12-09 06:11:38.413717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:43.985 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:44.506 06:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.506 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.507 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.507 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.766 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.767 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.028 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.290 06:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.290 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.552 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.814 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
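Every get_referral_ips check in the passage above follows one pattern: read the referral list out of the target over JSON-RPC, read it again from the host side with a live discovery, sort both, and require the two address lists to match. Stripped of the --hostnqn/--hostid bookkeeping used in this run, each comparison reduces to roughly:

  # target's own view of its referrals
  rpc_ips=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  # what an initiator sees in the discovery log page (referral records only)
  nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
  [[ "$rpc_ips" == "$nvme_ips" ]]

Filtering out the "current discovery subsystem" record matters: the log page always reports the subsystem being queried, and only the remaining records are true referrals, including the nqn.2016-06.io.spdk:cnode1 subsystem-type referral exercised above.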
00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.075 rmmod nvme_tcp 00:12:46.075 rmmod nvme_fabrics 00:12:46.075 rmmod nvme_keyring 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 238038 ']' 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 238038 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 238038 ']' 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 238038 00:12:46.075 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:46.076 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.076 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238038 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238038' 00:12:46.337 killing process with pid 238038 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 238038 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 238038 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.337 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.886 00:12:48.886 real 0m13.250s 00:12:48.886 user 0m15.857s 00:12:48.886 sys 0m6.528s 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.886 ************************************ 00:12:48.886 END TEST nvmf_referrals 00:12:48.886 ************************************ 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.886 ************************************ 00:12:48.886 START TEST nvmf_connect_disconnect 00:12:48.886 ************************************ 00:12:48.886 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.886 * Looking for test storage... 00:12:48.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.886 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.887 --rc genhtml_branch_coverage=1 00:12:48.887 --rc genhtml_function_coverage=1 00:12:48.887 --rc genhtml_legend=1 00:12:48.887 --rc geninfo_all_blocks=1 00:12:48.887 --rc geninfo_unexecuted_blocks=1 00:12:48.887 00:12:48.887 ' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.887 --rc genhtml_branch_coverage=1 00:12:48.887 --rc genhtml_function_coverage=1 00:12:48.887 --rc genhtml_legend=1 00:12:48.887 --rc geninfo_all_blocks=1 00:12:48.887 --rc geninfo_unexecuted_blocks=1 00:12:48.887 00:12:48.887 ' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.887 --rc genhtml_branch_coverage=1 00:12:48.887 --rc genhtml_function_coverage=1 00:12:48.887 --rc genhtml_legend=1 00:12:48.887 --rc geninfo_all_blocks=1 00:12:48.887 --rc geninfo_unexecuted_blocks=1 00:12:48.887 00:12:48.887 ' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.887 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.887 --rc genhtml_branch_coverage=1 00:12:48.887 --rc genhtml_function_coverage=1 00:12:48.887 --rc genhtml_legend=1 00:12:48.887 --rc geninfo_all_blocks=1 00:12:48.887 --rc geninfo_unexecuted_blocks=1 00:12:48.887 00:12:48.887 ' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.887 06:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.887 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.035 
06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:57.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.035 
06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:57.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:57.035 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
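Each 'Found net devices under ...' line in this scan comes from a plain sysfs walk: any kernel interface bound to a PCI function appears as a directory under that device's net/ node, so mapping supported NICs to interface names needs no extra tooling. The loop being traced is essentially:

  # map each supported PCI function to its kernel netdev name(s) via sysfs
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

The interleaved [[ up == up ]] entries are the same loop additionally skipping interfaces whose operstate is not up, so only usable ports land in net_devs.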
00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:57.035 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.035 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:12:57.036 00:12:57.036 --- 10.0.0.2 ping statistics --- 00:12:57.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.036 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:12:57.036 00:12:57.036 --- 10.0.0.1 ping statistics --- 00:12:57.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.036 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=242742 00:12:57.036 06:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 242742 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 242742 ']' 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.036 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 [2024-12-09 06:11:50.819496] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:12:57.036 [2024-12-09 06:11:50.819561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.036 [2024-12-09 06:11:50.917760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.036 [2024-12-09 06:11:50.969385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.036 [2024-12-09 06:11:50.969439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.036 [2024-12-09 06:11:50.969447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.036 [2024-12-09 06:11:50.969461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.036 [2024-12-09 06:11:50.969467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
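After the PCI walk above matches both 0x8086:0x159b ports and finds their net devices (cvl_0_0, cvl_0_1), nvmf_tcp_init (nvmf/common.sh@250-291 in the trace) turns the two E810 ports into a self-contained initiator/target link by hiding one of them in a network namespace. Stripped of the xtrace noise, the setup reduces to the following sketch; the interface names, addresses, and iptables rule are copied from the trace, while the inline comments are editorial:

  # Target port goes into its own namespace, so NVMe/TCP traffic between
  # initiator and target really crosses the link between the two ports.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listener port; the SPDK_NVMF comment tag is what lets
  # the teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore,
  # traced later) drop the rule again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the plumbing verified, nvmfappstart launches nvmf_tgt inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF command above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock.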
00:12:57.036 [2024-12-09 06:11:50.971385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.036 [2024-12-09 06:11:50.971550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.036 [2024-12-09 06:11:50.971857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.036 [2024-12-09 06:11:50.971859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.297 [2024-12-09 06:11:51.703984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.297 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.298 06:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.298 [2024-12-09 06:11:51.769520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:57.298 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:01.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.601 rmmod nvme_tcp 00:13:15.601 rmmod nvme_fabrics 00:13:15.601 rmmod nvme_keyring 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 242742 ']' 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 242742 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 242742 ']' 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 242742 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
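The connect_disconnect test body above is compact: four rpc_cmd calls assemble a malloc-backed subsystem, then set +x hides a loop that attaches and detaches the initiator num_iterations=5 times, leaving only the per-iteration "disconnected 1 controller(s)" summaries in the log. A rough equivalent of what runs between 06:11:51 and 06:12:10 follows; rpc.py stands in for the rpc_cmd wrapper, and the nvme connect/disconnect pair is an inference from the summaries, not lifted from the trace:

  rpc() { rpc.py -s /var/tmp/spdk.sock "$@"; }   # stand-in for the rpc_cmd wrapper above

  rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # transport flags exactly as traced
  rpc bdev_malloc_create 64 512                       # 64 MiB bdev, 512 B blocks -> Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Presumed shape of the hidden loop (num_iterations=5 per the trace):
  for i in {1..5}; do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... summary above
  done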
00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242742 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 242742' 00:13:15.601 killing process with pid 242742 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 242742 00:13:15.601 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 242742 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.862 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.408 00:13:18.408 real 0m29.374s 00:13:18.408 user 1m19.134s 00:13:18.408 sys 0m7.065s 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:18.408 ************************************ 00:13:18.408 END TEST nvmf_connect_disconnect 00:13:18.408 ************************************ 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:13:18.408 ************************************ 00:13:18.408 START TEST nvmf_multitarget 00:13:18.408 ************************************ 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:18.408 * Looking for test storage... 00:13:18.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.408 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.409 --rc genhtml_branch_coverage=1 00:13:18.409 --rc genhtml_function_coverage=1 00:13:18.409 --rc genhtml_legend=1 00:13:18.409 --rc geninfo_all_blocks=1 00:13:18.409 --rc geninfo_unexecuted_blocks=1 00:13:18.409 00:13:18.409 ' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.409 --rc genhtml_branch_coverage=1 00:13:18.409 --rc genhtml_function_coverage=1 00:13:18.409 --rc genhtml_legend=1 00:13:18.409 --rc geninfo_all_blocks=1 00:13:18.409 --rc geninfo_unexecuted_blocks=1 00:13:18.409 00:13:18.409 ' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.409 --rc genhtml_branch_coverage=1 00:13:18.409 --rc genhtml_function_coverage=1 00:13:18.409 --rc genhtml_legend=1 00:13:18.409 --rc geninfo_all_blocks=1 00:13:18.409 --rc geninfo_unexecuted_blocks=1 00:13:18.409 00:13:18.409 ' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.409 --rc genhtml_branch_coverage=1 00:13:18.409 --rc genhtml_function_coverage=1 00:13:18.409 --rc genhtml_legend=1 00:13:18.409 --rc geninfo_all_blocks=1 00:13:18.409 --rc geninfo_unexecuted_blocks=1 00:13:18.409 00:13:18.409 ' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.409 06:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:18.409 06:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.409 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.410 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.551 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:26.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:26.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:26.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:26.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.552 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:13:26.552 00:13:26.552 --- 10.0.0.2 ping statistics --- 00:13:26.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.552 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:26.552 00:13:26.552 --- 10.0.0.1 ping statistics --- 00:13:26.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.552 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=250616 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 250616 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 250616 ']' 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.552 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:26.552 [2024-12-09 06:12:20.209914] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:13:26.553 [2024-12-09 06:12:20.209980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.553 [2024-12-09 06:12:20.307504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.553 [2024-12-09 06:12:20.360250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.553 [2024-12-09 06:12:20.360304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.553 [2024-12-09 06:12:20.360313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.553 [2024-12-09 06:12:20.360320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.553 [2024-12-09 06:12:20.360326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.553 [2024-12-09 06:12:20.362347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.553 [2024-12-09 06:12:20.362538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.553 [2024-12-09 06:12:20.362614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.553 [2024-12-09 06:12:20.362616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:26.553 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:26.814 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:26.814 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:26.814 "nvmf_tgt_1" 00:13:26.814 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:27.074 "nvmf_tgt_2" 00:13:27.074 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
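The multitarget test exercises SPDK's ability to host several independent nvmf targets in one process. multitarget_rpc.py first confirms that only the default target exists, creates nvmf_tgt_1 and nvmf_tgt_2, then (continuing in the trace below) re-counts with jq, deletes both, and checks the count is back to one. Condensed into a sketch, with -s 32 presumably capping the subsystem count per target:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # just the default target

  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets

  $rpc nvmf_delete_target -n nvmf_tgt_1              # returns true
  $rpc nvmf_delete_target -n nvmf_tgt_2              # returns true
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only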
00:13:27.074 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:27.074 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:27.074 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:27.074 true 00:13:27.074 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:27.335 true 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.335 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.336 rmmod nvme_tcp 00:13:27.336 rmmod nvme_fabrics 00:13:27.336 rmmod nvme_keyring 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 250616 ']' 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 250616 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 250616 ']' 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 250616 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.597 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 250616 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.597 06:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 250616' 00:13:27.597 killing process with pid 250616 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 250616 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 250616 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.597 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.144 00:13:30.144 real 0m11.743s 00:13:30.144 user 0m10.015s 00:13:30.144 sys 0m6.116s 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:30.144 ************************************ 00:13:30.144 END TEST nvmf_multitarget 00:13:30.144 ************************************ 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.144 ************************************ 00:13:30.144 START TEST nvmf_rpc 00:13:30.144 ************************************ 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:30.144 * Looking for test storage... 
00:13:30.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.144 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:30.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.145 --rc genhtml_branch_coverage=1 00:13:30.145 --rc genhtml_function_coverage=1 00:13:30.145 --rc genhtml_legend=1 00:13:30.145 --rc geninfo_all_blocks=1 00:13:30.145 --rc geninfo_unexecuted_blocks=1 00:13:30.145 00:13:30.145 ' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:30.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.145 --rc genhtml_branch_coverage=1 00:13:30.145 --rc genhtml_function_coverage=1 00:13:30.145 --rc genhtml_legend=1 00:13:30.145 --rc geninfo_all_blocks=1 00:13:30.145 --rc geninfo_unexecuted_blocks=1 00:13:30.145 00:13:30.145 ' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:30.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.145 --rc genhtml_branch_coverage=1 00:13:30.145 --rc genhtml_function_coverage=1 00:13:30.145 --rc genhtml_legend=1 00:13:30.145 --rc geninfo_all_blocks=1 00:13:30.145 --rc geninfo_unexecuted_blocks=1 00:13:30.145 00:13:30.145 ' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:30.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.145 --rc genhtml_branch_coverage=1 00:13:30.145 --rc genhtml_function_coverage=1 00:13:30.145 --rc genhtml_legend=1 00:13:30.145 --rc geninfo_all_blocks=1 00:13:30.145 --rc geninfo_unexecuted_blocks=1 00:13:30.145 00:13:30.145 ' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
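
The cmp_versions trace above splits dotted version strings into arrays and compares them component-wise, which is how the harness decides that lcov 1.15 predates 2 and needs the older coverage flags. A condensed, self-contained sketch of that comparison (a simplified stand-in for scripts/common.sh, padding missing components with zero):

    # return 0 if dotted version $1 sorts strictly below $2
    version_lt() {
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v2[i]:-0} < ${v1[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: enable legacy branch/function coverage flags"
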
00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.145 06:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.145 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:38.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.287 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:38.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:38.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:38.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:38.288 06:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:38.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:13:38.288 00:13:38.288 --- 10.0.0.2 ping statistics --- 00:13:38.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.288 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:13:38.288 00:13:38.288 --- 10.0.0.1 ping statistics --- 00:13:38.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.288 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=255016 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 255016 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 255016 ']' 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.288 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.288 [2024-12-09 06:12:31.907973] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
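
From nvmfappstart onward the target runs inside the cvl_0_0_ns_spdk namespace created above: port cvl_0_0 was moved into the namespace for the target (10.0.0.2) while cvl_0_1 stays on the host for the initiator (10.0.0.1), and the pings just confirmed both directions. A condensed sketch of the launch-and-wait idiom, reusing the names shown in the log (the comment describes what the waitforlisten helper does, not its literal code):

    # launch nvmf_tgt inside the target namespace, then wait for its RPC socket
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock answers RPCs
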
00:13:38.288 [2024-12-09 06:12:31.908043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.288 [2024-12-09 06:12:32.005530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.288 [2024-12-09 06:12:32.056602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.288 [2024-12-09 06:12:32.056656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.288 [2024-12-09 06:12:32.056670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.288 [2024-12-09 06:12:32.056677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.288 [2024-12-09 06:12:32.056682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.288 [2024-12-09 06:12:32.058900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.288 [2024-12-09 06:12:32.059056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.288 [2024-12-09 06:12:32.059210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.288 [2024-12-09 06:12:32.059210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.288 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:38.289 "tick_rate": 2600000000, 00:13:38.289 "poll_groups": [ 00:13:38.289 { 00:13:38.289 "name": "nvmf_tgt_poll_group_000", 00:13:38.289 "admin_qpairs": 0, 00:13:38.289 "io_qpairs": 0, 00:13:38.289 "current_admin_qpairs": 0, 00:13:38.289 "current_io_qpairs": 0, 00:13:38.289 "pending_bdev_io": 0, 00:13:38.289 "completed_nvme_io": 0, 00:13:38.289 "transports": [] 00:13:38.289 }, 00:13:38.289 { 00:13:38.289 "name": "nvmf_tgt_poll_group_001", 00:13:38.289 "admin_qpairs": 0, 00:13:38.289 "io_qpairs": 0, 00:13:38.289 "current_admin_qpairs": 0, 00:13:38.289 "current_io_qpairs": 0, 00:13:38.289 "pending_bdev_io": 0, 00:13:38.289 "completed_nvme_io": 0, 00:13:38.289 "transports": [] 00:13:38.289 }, 00:13:38.289 { 00:13:38.289 "name": "nvmf_tgt_poll_group_002", 00:13:38.289 "admin_qpairs": 0, 00:13:38.289 "io_qpairs": 0, 00:13:38.289 
"current_admin_qpairs": 0, 00:13:38.289 "current_io_qpairs": 0, 00:13:38.289 "pending_bdev_io": 0, 00:13:38.289 "completed_nvme_io": 0, 00:13:38.289 "transports": [] 00:13:38.289 }, 00:13:38.289 { 00:13:38.289 "name": "nvmf_tgt_poll_group_003", 00:13:38.289 "admin_qpairs": 0, 00:13:38.289 "io_qpairs": 0, 00:13:38.289 "current_admin_qpairs": 0, 00:13:38.289 "current_io_qpairs": 0, 00:13:38.289 "pending_bdev_io": 0, 00:13:38.289 "completed_nvme_io": 0, 00:13:38.289 "transports": [] 00:13:38.289 } 00:13:38.289 ] 00:13:38.289 }' 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:38.289 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.550 [2024-12-09 06:12:32.907343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.550 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:38.551 "tick_rate": 2600000000, 00:13:38.551 "poll_groups": [ 00:13:38.551 { 00:13:38.551 "name": "nvmf_tgt_poll_group_000", 00:13:38.551 "admin_qpairs": 0, 00:13:38.551 "io_qpairs": 0, 00:13:38.551 "current_admin_qpairs": 0, 00:13:38.551 "current_io_qpairs": 0, 00:13:38.551 "pending_bdev_io": 0, 00:13:38.551 "completed_nvme_io": 0, 00:13:38.551 "transports": [ 00:13:38.551 { 00:13:38.551 "trtype": "TCP" 00:13:38.551 } 00:13:38.551 ] 00:13:38.551 }, 00:13:38.551 { 00:13:38.551 "name": "nvmf_tgt_poll_group_001", 00:13:38.551 "admin_qpairs": 0, 00:13:38.551 "io_qpairs": 0, 00:13:38.551 "current_admin_qpairs": 0, 00:13:38.551 "current_io_qpairs": 0, 00:13:38.551 "pending_bdev_io": 0, 00:13:38.551 "completed_nvme_io": 0, 00:13:38.551 "transports": [ 00:13:38.551 { 00:13:38.551 "trtype": "TCP" 00:13:38.551 } 00:13:38.551 ] 00:13:38.551 }, 00:13:38.551 { 00:13:38.551 "name": "nvmf_tgt_poll_group_002", 00:13:38.551 "admin_qpairs": 0, 00:13:38.551 "io_qpairs": 0, 00:13:38.551 "current_admin_qpairs": 0, 00:13:38.551 "current_io_qpairs": 0, 00:13:38.551 "pending_bdev_io": 0, 00:13:38.551 "completed_nvme_io": 0, 00:13:38.551 "transports": [ 00:13:38.551 { 00:13:38.551 "trtype": "TCP" 
00:13:38.551 } 00:13:38.551 ] 00:13:38.551 }, 00:13:38.551 { 00:13:38.551 "name": "nvmf_tgt_poll_group_003", 00:13:38.551 "admin_qpairs": 0, 00:13:38.551 "io_qpairs": 0, 00:13:38.551 "current_admin_qpairs": 0, 00:13:38.551 "current_io_qpairs": 0, 00:13:38.551 "pending_bdev_io": 0, 00:13:38.551 "completed_nvme_io": 0, 00:13:38.551 "transports": [ 00:13:38.551 { 00:13:38.551 "trtype": "TCP" 00:13:38.551 } 00:13:38.551 ] 00:13:38.551 } 00:13:38.551 ] 00:13:38.551 }' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:38.551 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.551 Malloc1 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.551 [2024-12-09 06:12:33.111908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:38.551 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:38.552 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:38.813 [2024-12-09 06:12:33.148869] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:38.813 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:38.813 could not add new controller: failed to write to nvme-fabrics device 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:38.813 06:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.813 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.196 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.196 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.196 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.196 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.196 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.742 [2024-12-09 06:12:36.905516] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:42.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:42.742 could not add new controller: failed to write to nvme-fabrics device 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.742 
06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.742 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.128 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.128 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.128 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.128 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:44.128 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:46.062 
06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.062 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.323 [2024-12-09 06:12:40.655663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.323 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.705 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.705 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:47.705 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.705 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:47.705 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:49.614 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 [2024-12-09 06:12:44.363893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.892 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.802 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.802 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:51.802 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.802 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:51.802 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:53.713 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.713 [2024-12-09 06:12:48.110356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:53.713 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.714 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:55.099 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.099 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:55.099 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.099 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:55.099 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:57.647 
06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
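Each pass through this stretch of the log is one iteration of the target/rpc.sh loop: create a subsystem, expose it over TCP, attach a namespace, connect from the kernel initiator, then tear everything down again. A minimal standalone sketch of one iteration, assuming spdk/scripts/rpc.py is on PATH, a TCP transport has already been created on the target, and a bdev named Malloc1 exists (the NQN, address, port, serial, and namespace ID are the ones visible in the trace):

  # create the subsystem and expose it on the TCP listener
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach the Malloc1 bdev as namespace 5 and open the subsystem to any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # kernel initiator side: connect, then disconnect once the device shows up
  # (the trace additionally passes --hostnqn/--hostid for this particular host)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # teardown in reverse order
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1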
00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 [2024-12-09 06:12:51.829831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.647 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.033 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.033 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.033 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.033 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.033 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
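The waitforserial and waitforserial_disconnect helpers being traced here poll lsblk until a block device carrying the subsystem serial appears or disappears. A simplified reconstruction of the polling shape (the real helpers live in common/autotest_common.sh and also honor an expected-device-count argument, which this sketch drops):

  waitforserial() {
      local serial=$1 i=0
      sleep 2                                              # give udev time to create the node
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1                                             # device never appeared
  }

  waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1                                             # device never went away
  }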
00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.945 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.205 [2024-12-09 06:12:55.539213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.205 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.587 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.587 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:02.587 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.587 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:02.587 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:04.505 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:04.767 
06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 [2024-12-09 06:12:59.249525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 [2024-12-09 06:12:59.313662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.767 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 
06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 [2024-12-09 06:12:59.381880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 [2024-12-09 06:12:59.454122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.029 [2024-12-09 06:12:59.522338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.029 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:05.030 "tick_rate": 2600000000, 00:14:05.030 "poll_groups": [ 00:14:05.030 { 00:14:05.030 "name": "nvmf_tgt_poll_group_000", 00:14:05.030 "admin_qpairs": 0, 00:14:05.030 "io_qpairs": 224, 00:14:05.030 "current_admin_qpairs": 0, 00:14:05.030 "current_io_qpairs": 0, 00:14:05.030 "pending_bdev_io": 0, 00:14:05.030 "completed_nvme_io": 434, 00:14:05.030 "transports": [ 00:14:05.030 { 00:14:05.030 "trtype": "TCP" 00:14:05.030 } 00:14:05.030 ] 00:14:05.030 }, 00:14:05.030 { 00:14:05.030 "name": "nvmf_tgt_poll_group_001", 00:14:05.030 "admin_qpairs": 1, 00:14:05.030 "io_qpairs": 223, 00:14:05.030 "current_admin_qpairs": 0, 00:14:05.030 "current_io_qpairs": 0, 00:14:05.030 "pending_bdev_io": 0, 00:14:05.030 "completed_nvme_io": 224, 00:14:05.030 "transports": [ 00:14:05.030 { 00:14:05.030 "trtype": "TCP" 00:14:05.030 } 00:14:05.030 ] 00:14:05.030 }, 00:14:05.030 { 00:14:05.030 "name": "nvmf_tgt_poll_group_002", 00:14:05.030 "admin_qpairs": 6, 00:14:05.030 "io_qpairs": 218, 00:14:05.030 "current_admin_qpairs": 0, 00:14:05.030 "current_io_qpairs": 0, 00:14:05.030 "pending_bdev_io": 0, 00:14:05.030 "completed_nvme_io": 306, 00:14:05.030 "transports": [ 00:14:05.030 { 00:14:05.030 "trtype": "TCP" 00:14:05.030 } 00:14:05.030 ] 00:14:05.030 }, 00:14:05.030 { 00:14:05.030 "name": "nvmf_tgt_poll_group_003", 00:14:05.030 "admin_qpairs": 0, 00:14:05.030 "io_qpairs": 224, 00:14:05.030 "current_admin_qpairs": 0, 00:14:05.030 "current_io_qpairs": 0, 00:14:05.030 "pending_bdev_io": 0, 00:14:05.030 "completed_nvme_io": 275, 00:14:05.030 "transports": [ 00:14:05.030 { 00:14:05.030 "trtype": "TCP" 00:14:05.030 } 00:14:05.030 ] 00:14:05.030 } 00:14:05.030 ] 00:14:05.030 }' 00:14:05.030 06:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:05.030 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.291 rmmod nvme_tcp 00:14:05.291 rmmod nvme_fabrics 00:14:05.291 rmmod nvme_keyring 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 255016 ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 255016 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 255016 ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 255016 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 255016 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 255016' 
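The jsum checks a few entries back reduce the captured nvmf_get_stats JSON to a single number per field: jq pulls one value out of each poll group and awk sums them. A plausible reconstruction of the helper from target/rpc.sh, assuming the JSON shown above is held in the stats variable, as the trace suggests:

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # with the stats captured above:
  #   jsum '.poll_groups[].admin_qpairs'  -> 0+1+6+0         = 7
  #   jsum '.poll_groups[].io_qpairs'     -> 224+223+218+224 = 889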
00:14:05.291 killing process with pid 255016 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 255016 00:14:05.291 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 255016 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.553 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.468 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:07.468 00:14:07.468 real 0m37.739s 00:14:07.468 user 1m53.435s 00:14:07.468 sys 0m7.712s 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.469 ************************************ 00:14:07.469 END TEST nvmf_rpc 00:14:07.469 ************************************ 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.469 06:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.737 ************************************ 00:14:07.737 START TEST nvmf_invalid 00:14:07.737 ************************************ 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:07.737 * Looking for test storage... 
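The END/START banners and the real/user/sys summary above come from run_test in common/autotest_common.sh, which wraps each suite script, times it, and moves on to the next one. A rough sketch of its shape (simplified; the real helper also validates its argument count, which is what the '[' 3 -le 1 ']' check in the trace is doing):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                              # e.g. invalid.sh --transport=tcp
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }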
00:14:07.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:07.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.737 --rc genhtml_branch_coverage=1 00:14:07.737 --rc genhtml_function_coverage=1 00:14:07.737 --rc genhtml_legend=1 00:14:07.737 --rc geninfo_all_blocks=1 00:14:07.737 --rc geninfo_unexecuted_blocks=1 00:14:07.737 00:14:07.737 ' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:07.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.737 --rc genhtml_branch_coverage=1 00:14:07.737 --rc genhtml_function_coverage=1 00:14:07.737 --rc genhtml_legend=1 00:14:07.737 --rc geninfo_all_blocks=1 00:14:07.737 --rc geninfo_unexecuted_blocks=1 00:14:07.737 00:14:07.737 ' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:07.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.737 --rc genhtml_branch_coverage=1 00:14:07.737 --rc genhtml_function_coverage=1 00:14:07.737 --rc genhtml_legend=1 00:14:07.737 --rc geninfo_all_blocks=1 00:14:07.737 --rc geninfo_unexecuted_blocks=1 00:14:07.737 00:14:07.737 ' 00:14:07.737 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:07.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.737 --rc genhtml_branch_coverage=1 00:14:07.737 --rc genhtml_function_coverage=1 00:14:07.737 --rc genhtml_legend=1 00:14:07.737 --rc geninfo_all_blocks=1 00:14:07.737 --rc geninfo_unexecuted_blocks=1 00:14:07.737 00:14:07.737 ' 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:07.738 06:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.738 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:07.999 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.143 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:14:16.144 00:14:16.144 --- 10.0.0.2 ping statistics --- 00:14:16.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.144 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:16.144 00:14:16.144 --- 10.0.0.1 ping statistics --- 00:14:16.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.144 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=263671 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 263671 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 263671 ']' 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.144 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.144 [2024-12-09 06:13:09.737670] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
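At this point the trace has completed nvmf_tcp_init: the target-side port (cvl_0_0) was moved into the private namespace cvl_0_0_ns_spdk, the initiator-side port (cvl_0_1) stayed in the root namespace, the two interfaces got 10.0.0.2 and 10.0.0.1, TCP port 4420 was opened in iptables, and reachability was verified with one ping in each direction. A minimal standalone sketch of the same topology, using the interface names from this trace (the cvl_* names are rig-specific and will differ elsewhere):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The NVMF_APP command line just below is prefixed with the same wrapper (ip netns exec cvl_0_0_ns_spdk), which is why nvmf_tgt listens inside the namespace while the test script keeps driving it from the root namespace over the filesystem socket /var/tmp/spdk.sock, which is unaffected by the network namespace.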
00:14:16.144 [2024-12-09 06:13:09.737740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.144 [2024-12-09 06:13:09.835306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.144 [2024-12-09 06:13:09.886652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.144 [2024-12-09 06:13:09.886702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.144 [2024-12-09 06:13:09.886711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.144 [2024-12-09 06:13:09.886718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.144 [2024-12-09 06:13:09.886724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.144 [2024-12-09 06:13:09.888600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.144 [2024-12-09 06:13:09.888779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.144 [2024-12-09 06:13:09.888966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.144 [2024-12-09 06:13:09.888967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:16.144 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29664 00:14:16.406 [2024-12-09 06:13:10.781286] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:16.406 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:16.406 { 00:14:16.406 "nqn": "nqn.2016-06.io.spdk:cnode29664", 00:14:16.406 "tgt_name": "foobar", 00:14:16.406 "method": "nvmf_create_subsystem", 00:14:16.406 "req_id": 1 00:14:16.406 } 00:14:16.406 Got JSON-RPC error response 00:14:16.406 response: 00:14:16.406 { 00:14:16.406 "code": -32603, 00:14:16.406 "message": "Unable to find target foobar" 00:14:16.406 }' 00:14:16.406 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:16.406 { 00:14:16.406 "nqn": "nqn.2016-06.io.spdk:cnode29664", 00:14:16.406 "tgt_name": "foobar", 00:14:16.406 "method": "nvmf_create_subsystem", 00:14:16.406 "req_id": 1 00:14:16.406 } 00:14:16.406 Got JSON-RPC error response 00:14:16.406 
response: 00:14:16.406 { 00:14:16.406 "code": -32603, 00:14:16.406 "message": "Unable to find target foobar" 00:14:16.406 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:16.406 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:16.406 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23059 00:14:16.406 [2024-12-09 06:13:10.982126] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23059: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:16.667 { 00:14:16.667 "nqn": "nqn.2016-06.io.spdk:cnode23059", 00:14:16.667 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:16.667 "method": "nvmf_create_subsystem", 00:14:16.667 "req_id": 1 00:14:16.667 } 00:14:16.667 Got JSON-RPC error response 00:14:16.667 response: 00:14:16.667 { 00:14:16.667 "code": -32602, 00:14:16.667 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:16.667 }' 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:16.667 { 00:14:16.667 "nqn": "nqn.2016-06.io.spdk:cnode23059", 00:14:16.667 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:16.667 "method": "nvmf_create_subsystem", 00:14:16.667 "req_id": 1 00:14:16.667 } 00:14:16.667 Got JSON-RPC error response 00:14:16.667 response: 00:14:16.667 { 00:14:16.667 "code": -32602, 00:14:16.667 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:16.667 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30419 00:14:16.667 [2024-12-09 06:13:11.170695] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30419: invalid model number 'SPDK_Controller' 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:16.667 { 00:14:16.667 "nqn": "nqn.2016-06.io.spdk:cnode30419", 00:14:16.667 "model_number": "SPDK_Controller\u001f", 00:14:16.667 "method": "nvmf_create_subsystem", 00:14:16.667 "req_id": 1 00:14:16.667 } 00:14:16.667 Got JSON-RPC error response 00:14:16.667 response: 00:14:16.667 { 00:14:16.667 "code": -32602, 00:14:16.667 "message": "Invalid MN SPDK_Controller\u001f" 00:14:16.667 }' 00:14:16.667 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:16.667 { 00:14:16.667 "nqn": "nqn.2016-06.io.spdk:cnode30419", 00:14:16.667 "model_number": "SPDK_Controller\u001f", 00:14:16.667 "method": "nvmf_create_subsystem", 00:14:16.667 "req_id": 1 00:14:16.667 } 00:14:16.667 Got JSON-RPC error response 00:14:16.667 response: 00:14:16.667 { 00:14:16.668 "code": -32602, 00:14:16.668 "message": "Invalid MN SPDK_Controller\u001f" 00:14:16.668 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:16.668 06:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.668 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.929 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:16.930 06:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 
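The long run of printf %x / echo -e / string+= records above (continuing just below until the assembled value is echoed) is gen_random_s building a 21-character serial number one code point at a time from the chars table (ASCII 32 through 127), with RANDOM seeded to 0 earlier in the trace (target/invalid.sh@16) so the sequence is reproducible run to run. A compact sketch of the same idea, not the harness's verbatim helper:

gen_random_s() {
    local length=$1 ll ch string=''
    local chars=({32..127})    # same code-point table as the trace
    for ((ll = 0; ll < length; ll++)); do
        # pick a code point, render it via printf's %b escape expansion
        printf -v ch '%b' "\\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")"
        string+=$ch
    done
    echo "$string"
}

The string produced here is fed to nvmf_create_subsystem as a serial number below; a second, 41-character run later in the trace does the same for the model number.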
00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*#dfPNc0:yG1}\}\ G>">' 00:14:16.930 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '*#dfPNc0:yG1}\}\ G>">' nqn.2016-06.io.spdk:cnode20416 00:14:17.192 [2024-12-09 06:13:11.515771] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20416: invalid serial number '*#dfPNc0:yG1}\}\ G>">' 00:14:17.192 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:17.192 { 00:14:17.192 "nqn": "nqn.2016-06.io.spdk:cnode20416", 00:14:17.192 "serial_number": "*#dfPNc0:yG1}\\}\\ G>\">", 00:14:17.192 "method": "nvmf_create_subsystem", 00:14:17.192 "req_id": 1 00:14:17.192 } 00:14:17.192 Got JSON-RPC error response 00:14:17.192 response: 00:14:17.192 { 00:14:17.192 "code": -32602, 00:14:17.192 "message": "Invalid SN *#dfPNc0:yG1}\\}\\ G>\">" 00:14:17.192 }' 00:14:17.192 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:17.192 { 00:14:17.192 "nqn": "nqn.2016-06.io.spdk:cnode20416", 00:14:17.192 "serial_number": "*#dfPNc0:yG1}\\}\\ G>\">", 00:14:17.192 "method": "nvmf_create_subsystem", 00:14:17.192 "req_id": 1 00:14:17.192 } 00:14:17.192 Got JSON-RPC error response 00:14:17.192 response: 00:14:17.192 { 00:14:17.192 "code": -32602, 00:14:17.192 "message": "Invalid SN *#dfPNc0:yG1}\\}\\ G>\">" 00:14:17.192 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:17.192 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:17.192 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' 
'71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x47' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 43 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.193 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.194 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+='`' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x63' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;' 00:14:17.455 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;' nqn.2016-06.io.spdk:cnode20911 00:14:17.455 [2024-12-09 06:13:12.005300] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20911: invalid model number 'bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;' 00:14:17.455 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:17.455 { 00:14:17.455 "nqn": "nqn.2016-06.io.spdk:cnode20911", 00:14:17.455 "model_number": "bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;", 00:14:17.455 "method": "nvmf_create_subsystem", 00:14:17.455 "req_id": 1 00:14:17.455 } 00:14:17.455 Got JSON-RPC error response 00:14:17.455 response: 00:14:17.455 { 00:14:17.455 "code": -32602, 00:14:17.455 "message": "Invalid MN bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;" 00:14:17.455 }' 00:14:17.455 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:17.455 { 00:14:17.455 "nqn": "nqn.2016-06.io.spdk:cnode20911", 00:14:17.455 "model_number": "bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;", 00:14:17.455 "method": "nvmf_create_subsystem", 00:14:17.455 "req_id": 1 00:14:17.455 } 00:14:17.455 Got JSON-RPC error response 00:14:17.455 response: 00:14:17.455 { 00:14:17.455 "code": -32602, 00:14:17.455 "message": "Invalid MN bHC9^GrWI|[r+NB{:{Xk`7YnL:;=|HSZ`O/z5:[c;" 00:14:17.455 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:17.455 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:17.716 [2024-12-09 06:13:12.181958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.716 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:17.977 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:17.977 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:17.977 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:17.977 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:14:17.977 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:18.238 [2024-12-09 06:13:12.568109] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:18.238 { 00:14:18.238 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:18.238 "listen_address": { 00:14:18.238 "trtype": "tcp", 00:14:18.238 "traddr": "", 00:14:18.238 "trsvcid": "4421" 00:14:18.238 }, 00:14:18.238 "method": "nvmf_subsystem_remove_listener", 00:14:18.238 "req_id": 1 00:14:18.238 } 00:14:18.238 Got JSON-RPC error response 00:14:18.238 response: 00:14:18.238 { 00:14:18.238 "code": -32602, 00:14:18.238 "message": "Invalid parameters" 00:14:18.238 }' 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:18.238 { 00:14:18.238 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:18.238 "listen_address": { 00:14:18.238 "trtype": "tcp", 00:14:18.238 "traddr": "", 00:14:18.238 "trsvcid": "4421" 00:14:18.238 }, 00:14:18.238 "method": "nvmf_subsystem_remove_listener", 00:14:18.238 "req_id": 1 00:14:18.238 } 00:14:18.238 Got JSON-RPC error response 00:14:18.238 response: 00:14:18.238 { 00:14:18.238 "code": -32602, 00:14:18.238 "message": "Invalid parameters" 00:14:18.238 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27776 -i 0 00:14:18.238 [2024-12-09 06:13:12.748629] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27776: invalid cntlid range [0-65519] 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:18.238 { 00:14:18.238 "nqn": "nqn.2016-06.io.spdk:cnode27776", 00:14:18.238 "min_cntlid": 0, 00:14:18.238 "method": "nvmf_create_subsystem", 00:14:18.238 "req_id": 1 00:14:18.238 } 00:14:18.238 Got JSON-RPC error response 00:14:18.238 response: 00:14:18.238 { 00:14:18.238 "code": -32602, 00:14:18.238 "message": "Invalid cntlid range [0-65519]" 00:14:18.238 }' 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:18.238 { 00:14:18.238 "nqn": "nqn.2016-06.io.spdk:cnode27776", 00:14:18.238 "min_cntlid": 0, 00:14:18.238 "method": "nvmf_create_subsystem", 00:14:18.238 "req_id": 1 00:14:18.238 } 00:14:18.238 Got JSON-RPC error response 00:14:18.238 response: 00:14:18.238 { 00:14:18.238 "code": -32602, 00:14:18.238 "message": "Invalid cntlid range [0-65519]" 00:14:18.238 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.238 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5341 -i 65520 00:14:18.498 [2024-12-09 06:13:12.917155] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5341: invalid cntlid range [65520-65519] 00:14:18.498 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:18.498 { 00:14:18.498 "nqn": "nqn.2016-06.io.spdk:cnode5341", 00:14:18.498 "min_cntlid": 65520, 
00:14:18.498 "method": "nvmf_create_subsystem", 00:14:18.498 "req_id": 1 00:14:18.498 } 00:14:18.498 Got JSON-RPC error response 00:14:18.498 response: 00:14:18.498 { 00:14:18.498 "code": -32602, 00:14:18.498 "message": "Invalid cntlid range [65520-65519]" 00:14:18.498 }' 00:14:18.498 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:18.498 { 00:14:18.498 "nqn": "nqn.2016-06.io.spdk:cnode5341", 00:14:18.498 "min_cntlid": 65520, 00:14:18.498 "method": "nvmf_create_subsystem", 00:14:18.498 "req_id": 1 00:14:18.498 } 00:14:18.498 Got JSON-RPC error response 00:14:18.498 response: 00:14:18.498 { 00:14:18.498 "code": -32602, 00:14:18.498 "message": "Invalid cntlid range [65520-65519]" 00:14:18.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.498 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22743 -I 0 00:14:18.759 [2024-12-09 06:13:13.097721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22743: invalid cntlid range [1-0] 00:14:18.759 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:18.759 { 00:14:18.759 "nqn": "nqn.2016-06.io.spdk:cnode22743", 00:14:18.759 "max_cntlid": 0, 00:14:18.759 "method": "nvmf_create_subsystem", 00:14:18.759 "req_id": 1 00:14:18.759 } 00:14:18.759 Got JSON-RPC error response 00:14:18.759 response: 00:14:18.759 { 00:14:18.759 "code": -32602, 00:14:18.759 "message": "Invalid cntlid range [1-0]" 00:14:18.759 }' 00:14:18.759 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:18.759 { 00:14:18.759 "nqn": "nqn.2016-06.io.spdk:cnode22743", 00:14:18.759 "max_cntlid": 0, 00:14:18.759 "method": "nvmf_create_subsystem", 00:14:18.759 "req_id": 1 00:14:18.759 } 00:14:18.759 Got JSON-RPC error response 00:14:18.759 response: 00:14:18.759 { 00:14:18.759 "code": -32602, 00:14:18.759 "message": "Invalid cntlid range [1-0]" 00:14:18.759 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.759 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1074 -I 65520 00:14:18.759 [2024-12-09 06:13:13.278273] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1074: invalid cntlid range [1-65520] 00:14:18.759 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:18.759 { 00:14:18.759 "nqn": "nqn.2016-06.io.spdk:cnode1074", 00:14:18.759 "max_cntlid": 65520, 00:14:18.759 "method": "nvmf_create_subsystem", 00:14:18.759 "req_id": 1 00:14:18.759 } 00:14:18.759 Got JSON-RPC error response 00:14:18.759 response: 00:14:18.759 { 00:14:18.759 "code": -32602, 00:14:18.759 "message": "Invalid cntlid range [1-65520]" 00:14:18.759 }' 00:14:18.759 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:18.759 { 00:14:18.759 "nqn": "nqn.2016-06.io.spdk:cnode1074", 00:14:18.759 "max_cntlid": 65520, 00:14:18.759 "method": "nvmf_create_subsystem", 00:14:18.759 "req_id": 1 00:14:18.759 } 00:14:18.759 Got JSON-RPC error response 00:14:18.759 response: 00:14:18.759 { 00:14:18.759 "code": -32602, 00:14:18.759 "message": "Invalid cntlid range [1-65520]" 00:14:18.759 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.759 
06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1697 -i 6 -I 5 00:14:19.020 [2024-12-09 06:13:13.454836] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1697: invalid cntlid range [6-5] 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:19.020 { 00:14:19.020 "nqn": "nqn.2016-06.io.spdk:cnode1697", 00:14:19.020 "min_cntlid": 6, 00:14:19.020 "max_cntlid": 5, 00:14:19.020 "method": "nvmf_create_subsystem", 00:14:19.020 "req_id": 1 00:14:19.020 } 00:14:19.020 Got JSON-RPC error response 00:14:19.020 response: 00:14:19.020 { 00:14:19.020 "code": -32602, 00:14:19.020 "message": "Invalid cntlid range [6-5]" 00:14:19.020 }' 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:19.020 { 00:14:19.020 "nqn": "nqn.2016-06.io.spdk:cnode1697", 00:14:19.020 "min_cntlid": 6, 00:14:19.020 "max_cntlid": 5, 00:14:19.020 "method": "nvmf_create_subsystem", 00:14:19.020 "req_id": 1 00:14:19.020 } 00:14:19.020 Got JSON-RPC error response 00:14:19.020 response: 00:14:19.020 { 00:14:19.020 "code": -32602, 00:14:19.020 "message": "Invalid cntlid range [6-5]" 00:14:19.020 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:19.020 { 00:14:19.020 "name": "foobar", 00:14:19.020 "method": "nvmf_delete_target", 00:14:19.020 "req_id": 1 00:14:19.020 } 00:14:19.020 Got JSON-RPC error response 00:14:19.020 response: 00:14:19.020 { 00:14:19.020 "code": -32602, 00:14:19.020 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:19.020 }' 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:19.020 { 00:14:19.020 "name": "foobar", 00:14:19.020 "method": "nvmf_delete_target", 00:14:19.020 "req_id": 1 00:14:19.020 } 00:14:19.020 Got JSON-RPC error response 00:14:19.020 response: 00:14:19.020 { 00:14:19.020 "code": -32602, 00:14:19.020 "message": "The specified target doesn't exist, cannot delete it." 
00:14:19.020 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.020 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.020 rmmod nvme_tcp 00:14:19.020 rmmod nvme_fabrics 00:14:19.281 rmmod nvme_keyring 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 263671 ']' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 263671 ']' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263671' 00:14:19.281 killing process with pid 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 263671 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.281 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.830 00:14:21.830 real 0m13.821s 00:14:21.830 user 0m20.173s 00:14:21.830 sys 0m6.572s 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:21.830 ************************************ 00:14:21.830 END TEST nvmf_invalid 00:14:21.830 ************************************ 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.830 ************************************ 00:14:21.830 START TEST nvmf_connect_stress 00:14:21.830 ************************************ 00:14:21.830 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:21.830 * Looking for test storage... 
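A note on the harness: run_test, seen invoking connect_stress.sh above, wraps each test script, times it (the real/user/sys block that closed nvmf_invalid comes from this wrapper), and prints the START TEST / END TEST banners that the autotest parser looks for. A single test can be reproduced outside Jenkins by invoking the script directly with the same arguments, a sketch assuming a built SPDK tree and root privileges:

  sudo ./test/nvmf/target/connect_stress.sh --transport=tcp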
00:14:21.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:21.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.830 --rc genhtml_branch_coverage=1 00:14:21.830 --rc genhtml_function_coverage=1 00:14:21.830 --rc genhtml_legend=1 00:14:21.830 --rc geninfo_all_blocks=1 00:14:21.830 --rc geninfo_unexecuted_blocks=1 00:14:21.830 00:14:21.830 ' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:21.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.830 --rc genhtml_branch_coverage=1 00:14:21.830 --rc genhtml_function_coverage=1 00:14:21.830 --rc genhtml_legend=1 00:14:21.830 --rc geninfo_all_blocks=1 00:14:21.830 --rc geninfo_unexecuted_blocks=1 00:14:21.830 00:14:21.830 ' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:21.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.830 --rc genhtml_branch_coverage=1 00:14:21.830 --rc genhtml_function_coverage=1 00:14:21.830 --rc genhtml_legend=1 00:14:21.830 --rc geninfo_all_blocks=1 00:14:21.830 --rc geninfo_unexecuted_blocks=1 00:14:21.830 00:14:21.830 ' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:21.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.830 --rc genhtml_branch_coverage=1 00:14:21.830 --rc genhtml_function_coverage=1 00:14:21.830 --rc genhtml_legend=1 00:14:21.830 --rc geninfo_all_blocks=1 00:14:21.830 --rc geninfo_unexecuted_blocks=1 00:14:21.830 00:14:21.830 ' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.830 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:21.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.831 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:29.974 06:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:29.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:29.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:29.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:29.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.974 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:29.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:14:29.975 00:14:29.975 --- 10.0.0.2 ping statistics --- 00:14:29.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.975 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:14:29.975 00:14:29.975 --- 10.0.0.1 ping statistics --- 00:14:29.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.975 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=268574 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 268574 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 268574 ']' 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:29.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.975 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.975 [2024-12-09 06:13:23.704833] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:14:29.975 [2024-12-09 06:13:23.704896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.975 [2024-12-09 06:13:23.783901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.975 [2024-12-09 06:13:23.833635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.975 [2024-12-09 06:13:23.833688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.975 [2024-12-09 06:13:23.833697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.975 [2024-12-09 06:13:23.833703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.975 [2024-12-09 06:13:23.833709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.975 [2024-12-09 06:13:23.835645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.975 [2024-12-09 06:13:23.835808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.975 [2024-12-09 06:13:23.835808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.975 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.975 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:29.975 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.975 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:29.975 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 [2024-12-09 06:13:24.601587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
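The records that follow provision the freshly started target over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, and a null bdev NULL1 for the stress run. Condensed to plain rpc.py calls, the sequence looks like the sketch below (an equivalent only; it assumes the target is already up and that rpc.py, like the rpc_cmd wrapper in the log, talks to the default /var/tmp/spdk.sock socket):

  # sketch: equivalent of the rpc_cmd sequence recorded below
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512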
00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 [2024-12-09 06:13:24.622832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 NULL1 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=268691 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.237 06:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.237 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.498 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.498 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:30.498 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.498 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.498 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.068 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.068 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:31.068 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.068 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.068 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.328 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.328 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:31.328 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.328 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.328 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.589 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.589 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:31.589 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.589 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.589 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.849 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.849 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:31.849 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.849 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.849 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.109 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.109 06:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:32.109 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.109 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.109 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.680 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.680 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:32.680 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.680 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.680 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.941 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.941 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:32.941 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.941 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.941 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.202 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:33.202 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.202 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.202 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.463 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.463 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:33.463 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.463 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.463 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.723 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.723 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:33.723 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.723 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.723 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.294 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.294 06:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:34.294 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.294 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.294 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.560 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.560 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:34.560 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.560 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.560 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.821 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.821 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:34.821 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.821 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.821 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.087 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.087 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:35.087 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.087 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.087 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.349 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.349 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:35.349 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.349 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.349 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.920 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.920 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:35.920 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.920 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.920 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.180 06:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:36.181 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.181 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.181 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.441 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.441 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:36.441 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.441 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.441 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.701 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.701 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:36.701 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.701 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.701 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.961 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.961 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:36.961 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.961 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.961 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.531 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.531 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:37.531 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.531 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.531 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.791 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.791 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:37.791 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.791 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.791 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.051 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.051 06:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:38.051 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.051 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.051 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.311 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.311 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:38.311 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.311 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.311 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.572 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.572 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:38.572 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.572 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.572 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.142 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.142 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:39.142 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.142 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.142 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.402 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.402 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:39.402 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.402 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.402 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.661 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.661 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:39.661 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.661 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.661 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.921 06:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:39.921 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.921 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.181 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.181 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:40.181 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.181 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.181 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.442 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268691 00:14:40.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (268691) - No such process 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 268691 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.702 rmmod nvme_tcp 00:14:40.702 rmmod nvme_fabrics 00:14:40.702 rmmod nvme_keyring 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 268574 ']' 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 268574 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 268574 ']' 00:14:40.702 06:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 268574 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268574 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268574' 00:14:40.702 killing process with pid 268574 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 268574 00:14:40.702 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 268574 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.963 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.876 00:14:42.876 real 0m21.389s 00:14:42.876 user 0m44.729s 00:14:42.876 sys 0m7.837s 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.876 ************************************ 00:14:42.876 END TEST nvmf_connect_stress 00:14:42.876 ************************************ 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.876 06:13:37 
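For reference, the killprocess() helper whose internals fill the connect_stress teardown trace above has roughly the following shape, paraphrased from the autotest_common.sh @954-@978 lines; the sudo special case tested at @964 is reduced to a comment because that branch is not taken in this run (process_name resolves to reactor_1).

    # Paraphrased sketch of killprocess(); abbreviated, not the verbatim helper.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @954: a pid is required
        kill -0 "$pid"                                       # @958: confirm it is still alive
        local process_name=
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 in this run
        fi
        # @964: the real helper special-cases process_name = sudo;
        # omitted here because this run takes the plain path
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap before rmmod/netns teardown
    }

Only after the reactor process is reaped does nvmftestfini proceed to unload nvme-tcp/nvme-fabrics, restore iptables, and flush the test namespace, which is the ordering the trace above shows.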
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.876 06:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.138 ************************************ 00:14:43.138 START TEST nvmf_fused_ordering 00:14:43.138 ************************************ 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.138 * Looking for test storage... 00:14:43.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:43.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.138 --rc genhtml_branch_coverage=1 00:14:43.138 --rc genhtml_function_coverage=1 00:14:43.138 --rc genhtml_legend=1 00:14:43.138 --rc geninfo_all_blocks=1 00:14:43.138 --rc geninfo_unexecuted_blocks=1 00:14:43.138 00:14:43.138 ' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:43.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.138 --rc genhtml_branch_coverage=1 00:14:43.138 --rc genhtml_function_coverage=1 00:14:43.138 --rc genhtml_legend=1 00:14:43.138 --rc geninfo_all_blocks=1 00:14:43.138 --rc geninfo_unexecuted_blocks=1 00:14:43.138 00:14:43.138 ' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:43.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.138 --rc genhtml_branch_coverage=1 00:14:43.138 --rc genhtml_function_coverage=1 00:14:43.138 --rc genhtml_legend=1 00:14:43.138 --rc geninfo_all_blocks=1 00:14:43.138 --rc geninfo_unexecuted_blocks=1 00:14:43.138 00:14:43.138 ' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:43.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.138 --rc genhtml_branch_coverage=1 00:14:43.138 --rc genhtml_function_coverage=1 00:14:43.138 --rc genhtml_legend=1 00:14:43.138 --rc geninfo_all_blocks=1 00:14:43.138 --rc geninfo_unexecuted_blocks=1 00:14:43.138 00:14:43.138 ' 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.138 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:43.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:43.139 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.287 06:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:51.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:51.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.287 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:51.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:51.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.288 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:14:51.288 00:14:51.288 --- 10.0.0.2 ping statistics --- 00:14:51.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.288 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:14:51.288 00:14:51.288 --- 10.0.0.1 ping statistics --- 00:14:51.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.288 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=274466 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 274466 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 274466 ']' 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.288 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.288 [2024-12-09 06:13:45.226307] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:14:51.288 [2024-12-09 06:13:45.226376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.289 [2024-12-09 06:13:45.305708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.289 [2024-12-09 06:13:45.354907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.289 [2024-12-09 06:13:45.354961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.289 [2024-12-09 06:13:45.354969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.289 [2024-12-09 06:13:45.354976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.289 [2024-12-09 06:13:45.354982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.289 [2024-12-09 06:13:45.355707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.550 [2024-12-09 06:13:46.093935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.550 [2024-12-09 06:13:46.118210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.550 NULL1 00:14:51.550 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:51.811 [2024-12-09 06:13:46.174567] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
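The five rpc_cmd calls above are the entire target bring-up for this test. A minimal standalone sketch of the same configuration, assuming rpc.py from the SPDK scripts directory is on PATH and talks to the default /var/tmp/spdk.sock rather than the netns-wrapped socket the harness uses:

  # Sketch: recreate the fused_ordering target setup by hand (the harness runs
  # these same RPCs via rpc_cmd inside the cvl_0_0_ns_spdk namespace).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary's output against that subsystem follows.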
00:14:51.811 [2024-12-09 06:13:46.174599] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274771 ]
00:14:52.383 Attached to nqn.2016-06.io.spdk:cnode1
00:14:52.383 Namespace ID: 1 size: 1GB
00:14:52.383 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1022 per-iteration lines identical except for the index, indices strictly consecutive, build timestamps advancing from 00:14:52.383 to 00:14:54.052]
00:14:54.052 fused_ordering(1023)
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:54.052 rmmod nvme_tcp
00:14:54.052 rmmod nvme_fabrics
00:14:54.052 rmmod nvme_keyring
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:14:54.052 06:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 274466 ']' 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 274466 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 274466 ']' 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 274466 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274466 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274466' 00:14:54.052 killing process with pid 274466 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 274466 00:14:54.052 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 274466 00:14:54.313 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.314 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.228 00:14:56.228 real 0m13.277s 00:14:56.228 user 0m7.131s 00:14:56.228 sys 0m6.868s 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:56.228 ************************************ 00:14:56.228 END TEST nvmf_fused_ordering 00:14:56.228 
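Everything from the trap reset through the address flush below is the standard nvmftestfini path. Condensed into a sketch, with the PID, rule tag, and interface names taken from this run; the ip netns delete line is an assumption about what _remove_spdk_ns does, since its body is suppressed in the trace:

  # Teardown sketch mirroring the nvmftestfini trace above.
  kill 274466                                            # nvmf_tgt started earlier as pid 274466
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF test rules
  modprobe -v -r nvme-tcp                                # also drops nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the remaining test interface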
************************************ 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.228 06:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.489 ************************************ 00:14:56.489 START TEST nvmf_ns_masking 00:14:56.489 ************************************ 00:14:56.489 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.489 * Looking for test storage... 00:14:56.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.489 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:56.490 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:56.490 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.490 --rc genhtml_branch_coverage=1 00:14:56.490 --rc genhtml_function_coverage=1 00:14:56.490 --rc genhtml_legend=1 00:14:56.490 --rc geninfo_all_blocks=1 00:14:56.490 --rc geninfo_unexecuted_blocks=1 00:14:56.490 00:14:56.490 ' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.490 --rc genhtml_branch_coverage=1 00:14:56.490 --rc genhtml_function_coverage=1 00:14:56.490 --rc genhtml_legend=1 00:14:56.490 --rc geninfo_all_blocks=1 00:14:56.490 --rc geninfo_unexecuted_blocks=1 00:14:56.490 00:14:56.490 ' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.490 --rc genhtml_branch_coverage=1 00:14:56.490 --rc genhtml_function_coverage=1 00:14:56.490 --rc genhtml_legend=1 00:14:56.490 --rc geninfo_all_blocks=1 00:14:56.490 --rc geninfo_unexecuted_blocks=1 00:14:56.490 00:14:56.490 ' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.490 --rc genhtml_branch_coverage=1 00:14:56.490 --rc genhtml_function_coverage=1 00:14:56.490 --rc genhtml_legend=1 00:14:56.490 --rc geninfo_all_blocks=1 00:14:56.490 --rc geninfo_unexecuted_blocks=1 00:14:56.490 00:14:56.490 ' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
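The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version component-wise. The same algorithm as a self-contained sketch (version_lt is a hypothetical name; the real helpers are lt, cmp_versions, and decimal):

  # Dotted-version less-than, as walked through by the cmp_versions trace above:
  # split both versions on ".-:" and compare corresponding components as integers.
  version_lt() {
      local IFS=.-:
      local -a ver1=($1) ver2=($2)
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller => less-than
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # all components equal => not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace: 1 < 2 on the first component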
nvmf/common.sh@7 -- # uname -s 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
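One real wart surfaces above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and test complains "integer expression expected" because the flag it checks is empty. A minimal reproduction with a stand-in variable (the actual variable name is not visible in the trace):

  flag=''                    # stand-in for the empty flag tested at common.sh line 33
  [ "$flag" -eq 1 ]          # -> [: : integer expression expected (this run's error)
  [ "${flag:-0}" -eq 1 ]     # guarded form: empty string defaults to 0, no error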
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:56.490 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ecea7692-2769-4f8b-88b1-2bf4a6416c1c 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=aec3bd43-4705-4a7d-9e41-593ba9931448 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=46a085a1-b20a-4e64-bfff-97664e08f924 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.751 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:04.886 06:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:04.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:04.886 06:13:58 
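[editor's note] The array setup above builds per-family device lists (e810, x722, mlx) keyed by PCI vendor:device pairs out of a pci_bus_cache lookup. Roughly, the classification amounts to the following; classify_nic is a hypothetical helper for illustration, and the ID-to-family pairs are the ones listed in the trace:

# Map a PCI vendor/device pair to the NIC family the test rig drives.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx  ;;   # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086 0x159b   # -> e810, matching "Found 0000:4b:00.0 (0x8086 - 0x159b)"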
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:04.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:04.886 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.886 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
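[editor's note] Each discovered PCI function is then resolved to its kernel netdev through sysfs, and only interfaces that are up are kept, yielding the "Found net devices under 0000:4b:00.0: cvl_0_0" lines. A sketch of that loop; the operstate read stands in for the trace's up-check and is an assumption:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue
        name=${dev##*/}                                   # e.g. cvl_0_0
        state=$(cat "/sys/class/net/$name/operstate" 2>/dev/null)
        [ "$state" = up ] && echo "Found net devices under $pci: $name"
    done
done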
00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:04.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.887 06:13:58 
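[editor's note] The ip commands above build the two-endpoint TCP rig: the target-side port cvl_0_0 (10.0.0.2) is moved into its own network namespace while the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, so traffic really crosses the wire. Condensed replay of that sequence, values taken verbatim from the trace:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
ping -c 1 10.0.0.2                              # initiator -> target, as verified above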
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:04.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:15:04.887 00:15:04.887 --- 10.0.0.2 ping statistics --- 00:15:04.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.887 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:15:04.887 00:15:04.887 --- 10.0.0.1 ping statistics --- 00:15:04.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.887 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=279016 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 279016 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 279016 ']' 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.887 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:04.887 [2024-12-09 06:13:58.666775] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:15:04.887 [2024-12-09 06:13:58.666838] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.887 [2024-12-09 06:13:58.764710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.887 [2024-12-09 06:13:58.816051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.887 [2024-12-09 06:13:58.816106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.887 [2024-12-09 06:13:58.816114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.887 [2024-12-09 06:13:58.816120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.887 [2024-12-09 06:13:58.816126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
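[editor's note] nvmfappstart then launches nvmf_tgt inside the target namespace and blocks until its JSON-RPC socket answers (the "Waiting for process to start up..." line). A sketch of that start-and-wait pattern; the polling loop is an approximation of waitforlisten, not the autotest_common.sh source:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # spdk_get_version succeeds once the app is listening on /var/tmp/spdk.sock
    "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done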
00:15:04.887 [2024-12-09 06:13:58.816823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:05.148 [2024-12-09 06:13:59.703708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:05.148 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:05.408 Malloc1 00:15:05.408 06:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.668 Malloc2 00:15:05.668 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:05.928 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:05.928 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.188 [2024-12-09 06:14:00.665495] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.188 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:06.188 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46a085a1-b20a-4e64-bfff-97664e08f924 -a 10.0.0.2 -s 4420 -i 4 00:15:06.449 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.449 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:06.449 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.449 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:06.449 
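[editor's note] With the target up, the test provisions it over JSON-RPC and connects from the host side. Condensed replay of the sequence logged above, with $SPDK as in the previous sketch and all values and flags taken verbatim from the trace (-a allows any host, -I passes the host identifier generated earlier):

rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1                   # 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 46a085a1-b20a-4e64-bfff-97664e08f924 -a 10.0.0.2 -s 4420 -i 4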
06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:08.359 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:08.619 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:08.619 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:08.619 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:08.619 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.619 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.619 [ 0]:0x1 00:15:08.619 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.619 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.619 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1579cf31a310496c91819a93e26176e5 00:15:08.619 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1579cf31a310496c91819a93e26176e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.619 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.881 [ 0]:0x1 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1579cf31a310496c91819a93e26176e5 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1579cf31a310496c91819a93e26176e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.881 06:14:03 
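[editor's note] The "[ 0]:0x1" lines come from the ns_is_visible helper: a namespace counts as visible when it appears in nvme list-ns and its NGUID is non-zero, since an inactive NSID reports an all-zero NGUID. A close reconstruction from the trace:

ns_is_visible() {
    local nsid=$1                                # e.g. 0x1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1 && echo "nsid 1 visible to this host"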
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.881 [ 1]:0x2 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.881 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.141 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46a085a1-b20a-4e64-bfff-97664e08f924 -a 10.0.0.2 -s 4420 -i 4 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:09.403 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:11.949 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.949 [ 0]:0x2 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.949 [ 0]:0x1 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1579cf31a310496c91819a93e26176e5 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1579cf31a310496c91819a93e26176e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.949 [ 1]:0x2 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.949 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.212 06:14:06 
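[editor's note] This is the core of the masking test: after Malloc1 is re-attached with --no-auto-visible, the namespace stays hidden from every host until nvmf_ns_add_host maps it, and nvmf_ns_remove_host hides it again. Condensed replay, with $rpc and ns_is_visible as defined in the earlier sketches:

$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
ns_is_visible 0x1 || echo "hidden by default"               # expected to fail
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
ns_is_visible 0x1 && echo "visible after add_host"
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
ns_is_visible 0x1 || echo "masked again"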
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.212 [ 0]:0x2 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.212 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:12.473 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:12.473 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46a085a1-b20a-4e64-bfff-97664e08f924 -a 10.0.0.2 -s 4420 -i 4 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:12.473 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.018 [ 0]:0x1 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1579cf31a310496c91819a93e26176e5 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1579cf31a310496c91819a93e26176e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.018 [ 1]:0x2 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.018 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.019 [ 0]:0x2 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.019 06:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:15.019 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:15.280 [2024-12-09 06:14:09.634576] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:15.280 request: 00:15:15.280 { 00:15:15.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.280 "nsid": 2, 00:15:15.280 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.280 "method": "nvmf_ns_remove_host", 00:15:15.280 "req_id": 1 00:15:15.280 } 00:15:15.280 Got JSON-RPC error response 00:15:15.280 response: 00:15:15.280 { 00:15:15.280 "code": -32602, 00:15:15.280 "message": "Invalid parameters" 00:15:15.280 } 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:15.280 06:14:09 
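[editor's note] The NOT wrapper above inverts a command's exit status so that an expected failure, here removing a host mapping that was never created on NSID 2 (the "Invalid parameters" response), counts as a pass. A minimal sketch of the idiom; the real helper also tracks the exit status in es, as visible in the trace:

NOT() {
    "$@" && return 1 || return 0                 # success becomes failure, and vice versa
}
NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
    && echo "rejected as expected (Invalid parameters)"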
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.280 [ 0]:0x2 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee3242688f9545ed9bcaa98ef9b8496c 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee3242688f9545ed9bcaa98ef9b8496c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=281012 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 281012 /var/tmp/host.sock 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 281012 ']' 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.280 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.280 [2024-12-09 06:14:09.852372] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:15:15.280 [2024-12-09 06:14:09.852421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281012 ] 00:15:15.540 [2024-12-09 06:14:09.920151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.540 [2024-12-09 06:14:09.954667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.800 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.800 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:15.800 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.800 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.060 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ecea7692-2769-4f8b-88b1-2bf4a6416c1c 00:15:16.060 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:16.060 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ECEA769227694F8B88B12BF4A6416C1C -i 00:15:16.321 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid aec3bd43-4705-4a7d-9e41-593ba9931448 00:15:16.321 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:16.321 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AEC3BD4347054A7D9E41593BA9931448 -i 00:15:16.322 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
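[editor's note] The block above re-adds both namespaces with explicit NGUIDs and then verifies them from a second SPDK instance acting as the host. uuid2nguid, as used here, is just the UUID with dashes stripped (the upper-casing step is assumed from the ECEA... output above), and the host-side checks go through the /var/tmp/host.sock RPC server. Sketch:

uuid2nguid() { tr -d - <<< "${1^^}"; }          # ecea7692-...-6c1c -> ECEA7692...6C1C
nguid=$(uuid2nguid ecea7692-2769-4f8b-88b1-2bf4a6416c1c)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"

hostrpc() {                                     # RPCs against the host-side app above
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect ecea7692-...-6c1c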
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:16.582 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:16.841 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:16.842 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:16.842 nvme0n1 00:15:16.842 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:16.842 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:17.101 nvme1n2 00:15:17.101 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:17.101 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:17.102 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:17.102 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:17.102 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:17.361 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:17.361 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:17.361 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:17.361 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ecea7692-2769-4f8b-88b1-2bf4a6416c1c == \e\c\e\a\7\6\9\2\-\2\7\6\9\-\4\f\8\b\-\8\8\b\1\-\2\b\f\4\a\6\4\1\6\c\1\c ]] 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
aec3bd43-4705-4a7d-9e41-593ba9931448 == \a\e\c\3\b\d\4\3\-\4\7\0\5\-\4\a\7\d\-\9\e\4\1\-\5\9\3\b\a\9\9\3\1\4\4\8 ]] 00:15:17.622 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.882 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:18.142 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ecea7692-2769-4f8b-88b1-2bf4a6416c1c 00:15:18.142 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:18.142 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ECEA769227694F8B88B12BF4A6416C1C 00:15:18.142 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ECEA769227694F8B88B12BF4A6416C1C 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:18.143 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ECEA769227694F8B88B12BF4A6416C1C 00:15:18.143 [2024-12-09 06:14:12.726722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:18.143 [2024-12-09 06:14:12.726751] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:18.143 [2024-12-09 06:14:12.726763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.404 request: 00:15:18.404 { 00:15:18.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.404 "namespace": { 00:15:18.404 "bdev_name": 
"invalid", 00:15:18.404 "nsid": 1, 00:15:18.404 "nguid": "ECEA769227694F8B88B12BF4A6416C1C", 00:15:18.404 "no_auto_visible": false, 00:15:18.404 "hide_metadata": false 00:15:18.404 }, 00:15:18.404 "method": "nvmf_subsystem_add_ns", 00:15:18.404 "req_id": 1 00:15:18.404 } 00:15:18.404 Got JSON-RPC error response 00:15:18.404 response: 00:15:18.404 { 00:15:18.404 "code": -32602, 00:15:18.404 "message": "Invalid parameters" 00:15:18.404 } 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ecea7692-2769-4f8b-88b1-2bf4a6416c1c 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ECEA769227694F8B88B12BF4A6416C1C -i 00:15:18.404 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:20.949 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:20.949 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:20.949 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 281012 ']' 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281012' 00:15:20.949 killing process with pid 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 281012 00:15:20.949 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:21.209 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:21.210 rmmod nvme_tcp 00:15:21.210 rmmod nvme_fabrics 00:15:21.210 rmmod nvme_keyring 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 279016 ']' 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 279016 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 279016 ']' 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 279016 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279016 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279016' 00:15:21.210 killing process with pid 279016 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 279016 00:15:21.210 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 279016 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:21.470 
06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.470 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:23.392 00:15:23.392 real 0m27.058s 00:15:23.392 user 0m29.847s 00:15:23.392 sys 0m8.153s 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:23.392 ************************************ 00:15:23.392 END TEST nvmf_ns_masking 00:15:23.392 ************************************ 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.392 ************************************ 00:15:23.392 START TEST nvmf_nvme_cli 00:15:23.392 ************************************ 00:15:23.392 06:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:23.655 * Looking for test storage... 
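The nvmf_ns_masking run that finishes above reduces to a short RPC sequence: a namespace is added to the subsystem in non-visible mode (-i), selectively exposed to one host NQN, and finally removed again. A condensed sketch of that flow, with paths, NQNs and the NGUID taken verbatim from the trace (the upper-casing step is an assumption -- the trace only shows the tr -d - half of the uuid2nguid helper, but the resulting NGUID is uppercase):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  uuid=ecea7692-2769-4f8b-88b1-2bf4a6416c1c
  nguid=$(tr -d - <<< "${uuid^^}")                 # uuid2nguid: uppercase (assumed) and strip dashes
  # Add the namespace masked: per the harness usage above, -i keeps it invisible to hosts.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
  # Unmask NSID 1 for host1 only; host2 still cannot see it.
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Mask again by removing the namespace outright.
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

The negative case traced above (adding bdev "invalid") is what produced the -32602 Invalid parameters JSON-RPC error recorded earlier.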
00:15:23.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:23.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.655 --rc genhtml_branch_coverage=1 00:15:23.655 --rc genhtml_function_coverage=1 00:15:23.655 --rc genhtml_legend=1 00:15:23.655 --rc geninfo_all_blocks=1 00:15:23.655 --rc geninfo_unexecuted_blocks=1 00:15:23.655 00:15:23.655 ' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:23.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.655 --rc genhtml_branch_coverage=1 00:15:23.655 --rc genhtml_function_coverage=1 00:15:23.655 --rc genhtml_legend=1 00:15:23.655 --rc geninfo_all_blocks=1 00:15:23.655 --rc geninfo_unexecuted_blocks=1 00:15:23.655 00:15:23.655 ' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:23.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.655 --rc genhtml_branch_coverage=1 00:15:23.655 --rc genhtml_function_coverage=1 00:15:23.655 --rc genhtml_legend=1 00:15:23.655 --rc geninfo_all_blocks=1 00:15:23.655 --rc geninfo_unexecuted_blocks=1 00:15:23.655 00:15:23.655 ' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:23.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.655 --rc genhtml_branch_coverage=1 00:15:23.655 --rc genhtml_function_coverage=1 00:15:23.655 --rc genhtml_legend=1 00:15:23.655 --rc geninfo_all_blocks=1 00:15:23.655 --rc geninfo_unexecuted_blocks=1 00:15:23.655 00:15:23.655 ' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
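The cmp_versions trace above (lt 1.15 2, deciding which lcov option spelling to emit) is the stock scripts/common.sh idiom: split both version strings on dots, dashes and colons, then compare field by field as integers. A standalone approximation of just the traced less-than path (the real helper routes other comparisons through the same cmp_versions core):

  lt() {
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                                              # equal: not less-than
  }
  lt 1.15 2 && echo "old lcov: use the --rc lcov_*_coverage=1 spellings"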
00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.655 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.656 06:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:23.656 06:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:31.796 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:31.797 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:31.797 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.797 
06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:31.797 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:31.797 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:31.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:15:31.797 00:15:31.797 --- 10.0.0.2 ping statistics --- 00:15:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.797 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:31.797 00:15:31.797 --- 10.0.0.1 ping statistics --- 00:15:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.797 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=286150 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 286150 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 286150 ']' 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.797 06:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.797 [2024-12-09 06:14:25.660111] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
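The nvmf_tcp_init records above rebuild the same two-port loopback topology every phy test uses: the first e810 port (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 before both directions are ping-checked. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

Running the target inside that namespace is why NVMF_APP is prefixed with ip netns exec cvl_0_0_ns_spdk in the startup record above.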
00:15:31.798 [2024-12-09 06:14:25.660178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.798 [2024-12-09 06:14:25.756064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.798 [2024-12-09 06:14:25.808970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.798 [2024-12-09 06:14:25.809027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.798 [2024-12-09 06:14:25.809035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.798 [2024-12-09 06:14:25.809042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.798 [2024-12-09 06:14:25.809049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.798 [2024-12-09 06:14:25.811012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.798 [2024-12-09 06:14:25.811168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.798 [2024-12-09 06:14:25.811318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.798 [2024-12-09 06:14:25.811319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 [2024-12-09 06:14:26.531318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 Malloc0 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
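Once the reactors are up, everything else is plain JSON-RPC against the target; the records here and just below amount to this provisioning sequence (rpc_cmd is the harness wrapper around scripts/rpc.py; flags copied verbatim from the trace, with -u 8192 setting the in-capsule data size):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # -a: allow any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The SPDKISFASTANDAWESOME serial is what the host-side waitforserial greps for once nvme connect has run.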
00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 Malloc1 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.059 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.320 [2024-12-09 06:14:26.653773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:15:32.320 00:15:32.320 Discovery Log Number of Records 2, Generation counter 2 00:15:32.320 =====Discovery Log Entry 0====== 00:15:32.320 trtype: tcp 00:15:32.320 adrfam: ipv4 00:15:32.320 subtype: current discovery subsystem 00:15:32.320 treq: not required 00:15:32.320 portid: 0 00:15:32.320 trsvcid: 4420 00:15:32.320 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:32.320 traddr: 10.0.0.2 00:15:32.320 eflags: explicit discovery connections, duplicate discovery information 00:15:32.320 sectype: none 00:15:32.320 =====Discovery Log Entry 1====== 00:15:32.320 trtype: tcp 00:15:32.320 adrfam: ipv4 00:15:32.320 subtype: nvme subsystem 00:15:32.320 treq: not required 00:15:32.320 portid: 0 00:15:32.320 trsvcid: 4420 00:15:32.320 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:32.320 traddr: 10.0.0.2 00:15:32.320 eflags: none 00:15:32.320 sectype: none 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:32.320 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.321 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:32.321 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:34.230 06:14:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:36.163 06:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:36.163 /dev/nvme0n2 ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:36.163 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.422 06:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:36.422 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.423 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:36.423 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:36.423 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:36.423 06:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:36.423 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:36.423 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:36.423 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:36.423 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:36.682 rmmod nvme_tcp 00:15:36.682 rmmod nvme_fabrics 00:15:36.682 rmmod nvme_keyring 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 286150 ']' 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 286150 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 286150 ']' 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 286150 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286150 
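On the host side the records above are the standard kernel-initiator round trip: discover, connect, wait until block devices carrying the subsystem serial appear, enumerate, then disconnect by NQN. Stripped of the retry scaffolding (the real waitforserial polls up to 15 times with sleep 2; NVME_HOSTNQN and NVME_HOSTID come from nvmf/common.sh as traced earlier):

  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do
      sleep 2                                             # both namespaces must surface
  done
  nvme list                                               # -> /dev/nvme0n1 and /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1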
00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286150' 00:15:36.682 killing process with pid 286150 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 286150 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 286150 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:36.682 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:36.943 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:36.943 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:36.943 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.943 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.943 06:14:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:38.853 00:15:38.853 real 0m15.365s 00:15:38.853 user 0m23.923s 00:15:38.853 sys 0m6.321s 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 ************************************ 00:15:38.853 END TEST nvmf_nvme_cli 00:15:38.853 ************************************ 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 ************************************ 00:15:38.853 START TEST nvmf_vfio_user 00:15:38.853 ************************************ 00:15:38.853 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:39.115 * Looking for test storage... 00:15:39.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.115 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.116 --rc genhtml_branch_coverage=1 00:15:39.116 --rc genhtml_function_coverage=1 00:15:39.116 --rc genhtml_legend=1 00:15:39.116 --rc geninfo_all_blocks=1 00:15:39.116 --rc geninfo_unexecuted_blocks=1 00:15:39.116 00:15:39.116 ' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.116 --rc genhtml_branch_coverage=1 00:15:39.116 --rc genhtml_function_coverage=1 00:15:39.116 --rc genhtml_legend=1 00:15:39.116 --rc geninfo_all_blocks=1 00:15:39.116 --rc geninfo_unexecuted_blocks=1 00:15:39.116 00:15:39.116 ' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.116 --rc genhtml_branch_coverage=1 00:15:39.116 --rc genhtml_function_coverage=1 00:15:39.116 --rc genhtml_legend=1 00:15:39.116 --rc geninfo_all_blocks=1 00:15:39.116 --rc geninfo_unexecuted_blocks=1 00:15:39.116 00:15:39.116 ' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.116 --rc genhtml_branch_coverage=1 00:15:39.116 --rc genhtml_function_coverage=1 00:15:39.116 --rc genhtml_legend=1 00:15:39.116 --rc geninfo_all_blocks=1 00:15:39.116 --rc geninfo_unexecuted_blocks=1 00:15:39.116 00:15:39.116 ' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
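The lcov probe near the top of this test runs scripts/common.sh's component-wise version compare (cmp_versions, @333-368: split both versions into fields, then walk them left to right). The same logic condensed into one function, assuming plain numeric dot-separated versions as in the 'lt 1.15 2' call traced above:

  version_lt() {   # true (0) when $1 sorts strictly before $2
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"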
00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=287522 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 287522' 00:15:39.116 Process pid: 287522 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 287522 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 287522 ']' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.116 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.378 [2024-12-09 06:14:33.707967] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:15:39.378 [2024-12-09 06:14:33.708040] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.378 [2024-12-09 06:14:33.797155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.378 [2024-12-09 06:14:33.837736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.378 [2024-12-09 06:14:33.837781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
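At this point the target is up: nvmf_vfio_user.sh@54-60 launched nvmf_tgt on cores 0-3, installed the cleanup trap, and waitforlisten blocked until the RPC socket answered, which is when the DPDK EAL and reactor messages above appear. The same bring-up reduced to its essentials — the rpc_get_methods readiness probe and the 10 s budget are assumptions; the binary, flags, and /var/tmp/spdk.sock path are from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  for (( i = 0; i < 100; i++ )); do
      # ready once the app answers RPCs on the default socket
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done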
00:15:39.378 [2024-12-09 06:14:33.837787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.378 [2024-12-09 06:14:33.837792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.378 [2024-12-09 06:14:33.837797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.378 [2024-12-09 06:14:33.839388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.378 [2024-12-09 06:14:33.839545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.378 [2024-12-09 06:14:33.839585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.378 [2024-12-09 06:14:33.839586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.949 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.950 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:39.950 06:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:41.333 Malloc1 00:15:41.333 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:41.593 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:41.855 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:41.855 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.855 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:41.855 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:42.117 Malloc2 00:15:42.117 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
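The provisioning traced here (nvmf_vfio_user.sh@64-74, with the last two cnode2 RPCs following just below) is the whole vfio-user topology: one VFIOUSER transport, then per device a socket directory, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener bound to that directory. Folded into a loop — every RPC name and argument is taken from the trace, only the rpc= shorthand is added:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"
      $rpc bdev_malloc_create 64 512 -b Malloc$i    # 64 MiB bdev, 512 B blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done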
00:15:42.378 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:42.378 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.639 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:42.639 [2024-12-09 06:14:37.150443] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:15:42.639 [2024-12-09 06:14:37.150493] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288152 ] 00:15:42.639 [2024-12-09 06:14:37.193270] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:42.639 [2024-12-09 06:14:37.200699] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.639 [2024-12-09 06:14:37.200717] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb2ff54c000 00:15:42.639 [2024-12-09 06:14:37.201696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.202700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.203707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.204710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.205719] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.206722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.207721] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.208735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.639 [2024-12-09 06:14:37.209741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.639 [2024-12-09 06:14:37.209748] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb2ff541000 00:15:42.639 [2024-12-09 06:14:37.210687] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.902 [2024-12-09 06:14:37.224909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:42.902 [2024-12-09 06:14:37.224929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:42.902 [2024-12-09 06:14:37.227853] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.902 [2024-12-09 06:14:37.227885] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:42.902 [2024-12-09 06:14:37.227946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:42.902 [2024-12-09 06:14:37.227957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:42.902 [2024-12-09 06:14:37.227961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:42.902 [2024-12-09 06:14:37.228852] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:42.902 [2024-12-09 06:14:37.228860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:42.902 [2024-12-09 06:14:37.228865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:42.902 [2024-12-09 06:14:37.229860] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.902 [2024-12-09 06:14:37.229866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:42.902 [2024-12-09 06:14:37.229872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.230868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:42.902 [2024-12-09 06:14:37.230874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.231869] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
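The register traffic in this stretch is spdk_nvme_identify walking the standard NVMe init sequence over the vfio-user socket: map the BARs, read VS and CAP, confirm CC.EN=0 and CSTS.RDY=0, program the admin queue registers, then set CC.EN=1 and poll CSTS until RDY=1. The exact invocation from the harness (@83), shown standalone; the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs interleaved here:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci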
00:15:42.902 [2024-12-09 06:14:37.231875] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:42.902 [2024-12-09 06:14:37.231879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.231884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.231990] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:42.902 [2024-12-09 06:14:37.231994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.231997] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:42.902 [2024-12-09 06:14:37.232881] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:42.902 [2024-12-09 06:14:37.233880] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:42.902 [2024-12-09 06:14:37.234891] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.902 [2024-12-09 06:14:37.235891] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.902 [2024-12-09 06:14:37.235956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:42.902 [2024-12-09 06:14:37.236903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:42.902 [2024-12-09 06:14:37.236909] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:42.902 [2024-12-09 06:14:37.236913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.236928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:42.902 [2024-12-09 06:14:37.236933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.236949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.902 [2024-12-09 06:14:37.236953] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.902 [2024-12-09 06:14:37.236956] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.902 [2024-12-09 06:14:37.236966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:42.902 [2024-12-09 06:14:37.237010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:42.902 [2024-12-09 06:14:37.237018] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:42.902 [2024-12-09 06:14:37.237024] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:42.902 [2024-12-09 06:14:37.237028] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:42.902 [2024-12-09 06:14:37.237031] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:42.902 [2024-12-09 06:14:37.237035] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:42.902 [2024-12-09 06:14:37.237039] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:42.902 [2024-12-09 06:14:37.237042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:42.902 [2024-12-09 06:14:37.237066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:42.902 [2024-12-09 06:14:37.237074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.902 [2024-12-09 06:14:37.237081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.902 [2024-12-09 06:14:37.237087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.902 [2024-12-09 06:14:37.237093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.902 [2024-12-09 06:14:37.237096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:42.902 [2024-12-09 06:14:37.237121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:42.902 [2024-12-09 06:14:37.237125] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:42.902 
[2024-12-09 06:14:37.237129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:42.902 [2024-12-09 06:14:37.237138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:42.903 [2024-12-09 06:14:37.237212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:42.903 [2024-12-09 06:14:37.237215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237237] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:42.903 [2024-12-09 06:14:37.237244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.903 [2024-12-09 06:14:37.237258] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.903 [2024-12-09 06:14:37.237260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237301] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.903 [2024-12-09 06:14:37.237304] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.903 [2024-12-09 06:14:37.237307] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237357] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:42.903 [2024-12-09 06:14:37.237360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:42.903 [2024-12-09 06:14:37.237364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:42.903 [2024-12-09 06:14:37.237378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237446] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:42.903 [2024-12-09 06:14:37.237453] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:42.903 [2024-12-09 06:14:37.237456] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:42.903 [2024-12-09 06:14:37.237458] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:42.903 [2024-12-09 06:14:37.237460] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:42.903 [2024-12-09 06:14:37.237465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:42.903 [2024-12-09 06:14:37.237471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:42.903 [2024-12-09 06:14:37.237474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:42.903 [2024-12-09 06:14:37.237476] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237486] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:42.903 [2024-12-09 06:14:37.237489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.903 [2024-12-09 06:14:37.237491] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237501] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:42.903 [2024-12-09 06:14:37.237505] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:42.903 [2024-12-09 06:14:37.237509] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:42.903 [2024-12-09 06:14:37.237514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:42.903 [2024-12-09 06:14:37.237519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:42.903 [2024-12-09 06:14:37.237541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:42.903 ===================================================== 00:15:42.903 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.903 ===================================================== 00:15:42.903 Controller Capabilities/Features 00:15:42.903 ================================ 00:15:42.903 Vendor ID: 4e58 00:15:42.903 Subsystem Vendor ID: 4e58 00:15:42.903 Serial Number: SPDK1 00:15:42.903 Model Number: SPDK bdev Controller 00:15:42.903 Firmware Version: 25.01 00:15:42.903 Recommended Arb Burst: 6 00:15:42.903 IEEE OUI Identifier: 8d 6b 50 00:15:42.903 Multi-path I/O 00:15:42.903 May have multiple subsystem ports: Yes 00:15:42.903 May have multiple controllers: Yes 00:15:42.903 Associated with SR-IOV VF: No 00:15:42.903 Max Data Transfer Size: 131072 00:15:42.903 Max Number of Namespaces: 32 00:15:42.903 Max Number of I/O Queues: 127 00:15:42.903 NVMe Specification Version (VS): 1.3 00:15:42.903 NVMe Specification Version (Identify): 1.3 00:15:42.903 Maximum Queue Entries: 256 00:15:42.903 Contiguous Queues Required: Yes 00:15:42.903 Arbitration Mechanisms Supported 00:15:42.903 Weighted Round Robin: Not Supported 00:15:42.903 Vendor Specific: Not Supported 00:15:42.903 Reset Timeout: 15000 ms 00:15:42.903 Doorbell Stride: 4 bytes 00:15:42.903 NVM Subsystem Reset: Not Supported 00:15:42.903 Command Sets Supported 00:15:42.903 NVM Command Set: Supported 00:15:42.903 Boot Partition: Not Supported 00:15:42.903 Memory Page Size Minimum: 4096 bytes 00:15:42.903 Memory Page Size Maximum: 4096 bytes 00:15:42.903 Persistent Memory Region: Not Supported 00:15:42.903 Optional Asynchronous Events Supported 00:15:42.903 Namespace Attribute Notices: Supported 00:15:42.903 Firmware Activation Notices: Not Supported 00:15:42.903 ANA Change Notices: Not Supported 00:15:42.903 PLE Aggregate Log Change Notices: Not Supported 00:15:42.903 LBA Status Info Alert Notices: Not Supported 00:15:42.903 EGE Aggregate Log Change Notices: Not Supported 00:15:42.904 Normal NVM Subsystem Shutdown event: Not Supported 00:15:42.904 Zone Descriptor Change Notices: Not Supported 00:15:42.904 Discovery Log Change Notices: Not Supported 00:15:42.904 Controller Attributes 00:15:42.904 128-bit Host Identifier: Supported 00:15:42.904 Non-Operational Permissive Mode: Not Supported 00:15:42.904 NVM Sets: Not Supported 00:15:42.904 Read Recovery Levels: Not Supported 00:15:42.904 Endurance Groups: Not Supported 00:15:42.904 Predictable Latency Mode: Not Supported 00:15:42.904 Traffic Based Keep ALive: Not Supported 00:15:42.904 Namespace Granularity: Not Supported 00:15:42.904 SQ Associations: Not Supported 00:15:42.904 UUID List: Not Supported 00:15:42.904 Multi-Domain Subsystem: Not Supported 00:15:42.904 Fixed Capacity Management: Not Supported 00:15:42.904 Variable Capacity Management: Not Supported 00:15:42.904 Delete Endurance Group: Not Supported 00:15:42.904 Delete NVM Set: Not Supported 00:15:42.904 Extended LBA Formats Supported: Not Supported 00:15:42.904 Flexible Data Placement Supported: Not Supported 00:15:42.904 00:15:42.904 Controller Memory Buffer Support 00:15:42.904 ================================ 00:15:42.904 
Supported: No 00:15:42.904 00:15:42.904 Persistent Memory Region Support 00:15:42.904 ================================ 00:15:42.904 Supported: No 00:15:42.904 00:15:42.904 Admin Command Set Attributes 00:15:42.904 ============================ 00:15:42.904 Security Send/Receive: Not Supported 00:15:42.904 Format NVM: Not Supported 00:15:42.904 Firmware Activate/Download: Not Supported 00:15:42.904 Namespace Management: Not Supported 00:15:42.904 Device Self-Test: Not Supported 00:15:42.904 Directives: Not Supported 00:15:42.904 NVMe-MI: Not Supported 00:15:42.904 Virtualization Management: Not Supported 00:15:42.904 Doorbell Buffer Config: Not Supported 00:15:42.904 Get LBA Status Capability: Not Supported 00:15:42.904 Command & Feature Lockdown Capability: Not Supported 00:15:42.904 Abort Command Limit: 4 00:15:42.904 Async Event Request Limit: 4 00:15:42.904 Number of Firmware Slots: N/A 00:15:42.904 Firmware Slot 1 Read-Only: N/A 00:15:42.904 Firmware Activation Without Reset: N/A 00:15:42.904 Multiple Update Detection Support: N/A 00:15:42.904 Firmware Update Granularity: No Information Provided 00:15:42.904 Per-Namespace SMART Log: No 00:15:42.904 Asymmetric Namespace Access Log Page: Not Supported 00:15:42.904 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:42.904 Command Effects Log Page: Supported 00:15:42.904 Get Log Page Extended Data: Supported 00:15:42.904 Telemetry Log Pages: Not Supported 00:15:42.904 Persistent Event Log Pages: Not Supported 00:15:42.904 Supported Log Pages Log Page: May Support 00:15:42.904 Commands Supported & Effects Log Page: Not Supported 00:15:42.904 Feature Identifiers & Effects Log Page:May Support 00:15:42.904 NVMe-MI Commands & Effects Log Page: May Support 00:15:42.904 Data Area 4 for Telemetry Log: Not Supported 00:15:42.904 Error Log Page Entries Supported: 128 00:15:42.904 Keep Alive: Supported 00:15:42.904 Keep Alive Granularity: 10000 ms 00:15:42.904 00:15:42.904 NVM Command Set Attributes 00:15:42.904 ========================== 00:15:42.904 Submission Queue Entry Size 00:15:42.904 Max: 64 00:15:42.904 Min: 64 00:15:42.904 Completion Queue Entry Size 00:15:42.904 Max: 16 00:15:42.904 Min: 16 00:15:42.904 Number of Namespaces: 32 00:15:42.904 Compare Command: Supported 00:15:42.904 Write Uncorrectable Command: Not Supported 00:15:42.904 Dataset Management Command: Supported 00:15:42.904 Write Zeroes Command: Supported 00:15:42.904 Set Features Save Field: Not Supported 00:15:42.904 Reservations: Not Supported 00:15:42.904 Timestamp: Not Supported 00:15:42.904 Copy: Supported 00:15:42.904 Volatile Write Cache: Present 00:15:42.904 Atomic Write Unit (Normal): 1 00:15:42.904 Atomic Write Unit (PFail): 1 00:15:42.904 Atomic Compare & Write Unit: 1 00:15:42.904 Fused Compare & Write: Supported 00:15:42.904 Scatter-Gather List 00:15:42.904 SGL Command Set: Supported (Dword aligned) 00:15:42.904 SGL Keyed: Not Supported 00:15:42.904 SGL Bit Bucket Descriptor: Not Supported 00:15:42.904 SGL Metadata Pointer: Not Supported 00:15:42.904 Oversized SGL: Not Supported 00:15:42.904 SGL Metadata Address: Not Supported 00:15:42.904 SGL Offset: Not Supported 00:15:42.904 Transport SGL Data Block: Not Supported 00:15:42.904 Replay Protected Memory Block: Not Supported 00:15:42.904 00:15:42.904 Firmware Slot Information 00:15:42.904 ========================= 00:15:42.904 Active slot: 1 00:15:42.904 Slot 1 Firmware Revision: 25.01 00:15:42.904 00:15:42.904 00:15:42.904 Commands Supported and Effects 00:15:42.904 ============================== 00:15:42.904 Admin 
Commands 00:15:42.904 -------------- 00:15:42.904 Get Log Page (02h): Supported 00:15:42.904 Identify (06h): Supported 00:15:42.904 Abort (08h): Supported 00:15:42.904 Set Features (09h): Supported 00:15:42.904 Get Features (0Ah): Supported 00:15:42.904 Asynchronous Event Request (0Ch): Supported 00:15:42.904 Keep Alive (18h): Supported 00:15:42.904 I/O Commands 00:15:42.904 ------------ 00:15:42.904 Flush (00h): Supported LBA-Change 00:15:42.904 Write (01h): Supported LBA-Change 00:15:42.904 Read (02h): Supported 00:15:42.904 Compare (05h): Supported 00:15:42.904 Write Zeroes (08h): Supported LBA-Change 00:15:42.904 Dataset Management (09h): Supported LBA-Change 00:15:42.904 Copy (19h): Supported LBA-Change 00:15:42.904 00:15:42.904 Error Log 00:15:42.904 ========= 00:15:42.904 00:15:42.904 Arbitration 00:15:42.904 =========== 00:15:42.904 Arbitration Burst: 1 00:15:42.904 00:15:42.904 Power Management 00:15:42.904 ================ 00:15:42.904 Number of Power States: 1 00:15:42.904 Current Power State: Power State #0 00:15:42.904 Power State #0: 00:15:42.904 Max Power: 0.00 W 00:15:42.904 Non-Operational State: Operational 00:15:42.904 Entry Latency: Not Reported 00:15:42.904 Exit Latency: Not Reported 00:15:42.904 Relative Read Throughput: 0 00:15:42.904 Relative Read Latency: 0 00:15:42.904 Relative Write Throughput: 0 00:15:42.904 Relative Write Latency: 0 00:15:42.904 Idle Power: Not Reported 00:15:42.904 Active Power: Not Reported 00:15:42.904 Non-Operational Permissive Mode: Not Supported 00:15:42.904 00:15:42.904 Health Information 00:15:42.904 ================== 00:15:42.904 Critical Warnings: 00:15:42.904 Available Spare Space: OK 00:15:42.904 Temperature: OK 00:15:42.904 Device Reliability: OK 00:15:42.904 Read Only: No 00:15:42.904 Volatile Memory Backup: OK 00:15:42.904 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:42.904 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:42.904 Available Spare: 0% 00:15:42.904 Available Sp[2024-12-09 06:14:37.237617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:42.904 [2024-12-09 06:14:37.237627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:42.904 [2024-12-09 06:14:37.237650] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:42.904 [2024-12-09 06:14:37.237657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.904 [2024-12-09 06:14:37.237662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.904 [2024-12-09 06:14:37.237666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.904 [2024-12-09 06:14:37.237671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.904 [2024-12-09 06:14:37.239454] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.904 [2024-12-09 06:14:37.239462] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:42.904 [2024-12-09 06:14:37.239917] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.904 [2024-12-09 06:14:37.239957] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:42.904 [2024-12-09 06:14:37.239962] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:42.904 [2024-12-09 06:14:37.240927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:42.904 [2024-12-09 06:14:37.240935] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:42.904 [2024-12-09 06:14:37.240981] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:42.904 [2024-12-09 06:14:37.242953] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.904 Available Spare Threshold: 0% 00:15:42.904 Life Percentage Used: 0% 00:15:42.904 Data Units Read: 0 00:15:42.904 Data Units Written: 0 00:15:42.905 Host Read Commands: 0 00:15:42.905 Host Write Commands: 0 00:15:42.905 Controller Busy Time: 0 minutes 00:15:42.905 Power Cycles: 0 00:15:42.905 Power On Hours: 0 hours 00:15:42.905 Unsafe Shutdowns: 0 00:15:42.905 Unrecoverable Media Errors: 0 00:15:42.905 Lifetime Error Log Entries: 0 00:15:42.905 Warning Temperature Time: 0 minutes 00:15:42.905 Critical Temperature Time: 0 minutes 00:15:42.905 00:15:42.905 Number of Queues 00:15:42.905 ================ 00:15:42.905 Number of I/O Submission Queues: 127 00:15:42.905 Number of I/O Completion Queues: 127 00:15:42.905 00:15:42.905 Active Namespaces 00:15:42.905 ================= 00:15:42.905 Namespace ID:1 00:15:42.905 Error Recovery Timeout: Unlimited 00:15:42.905 Command Set Identifier: NVM (00h) 00:15:42.905 Deallocate: Supported 00:15:42.905 Deallocated/Unwritten Error: Not Supported 00:15:42.905 Deallocated Read Value: Unknown 00:15:42.905 Deallocate in Write Zeroes: Not Supported 00:15:42.905 Deallocated Guard Field: 0xFFFF 00:15:42.905 Flush: Supported 00:15:42.905 Reservation: Supported 00:15:42.905 Namespace Sharing Capabilities: Multiple Controllers 00:15:42.905 Size (in LBAs): 131072 (0GiB) 00:15:42.905 Capacity (in LBAs): 131072 (0GiB) 00:15:42.905 Utilization (in LBAs): 131072 (0GiB) 00:15:42.905 NGUID: 5DBD4A6462AC447F989E80710CFA724C 00:15:42.905 UUID: 5dbd4a64-62ac-447f-989e-80710cfa724c 00:15:42.905 Thin Provisioning: Not Supported 00:15:42.905 Per-NS Atomic Units: Yes 00:15:42.905 Atomic Boundary Size (Normal): 0 00:15:42.905 Atomic Boundary Size (PFail): 0 00:15:42.905 Atomic Boundary Offset: 0 00:15:42.905 Maximum Single Source Range Length: 65535 00:15:42.905 Maximum Copy Length: 65535 00:15:42.905 Maximum Source Range Count: 1 00:15:42.905 NGUID/EUI64 Never Reused: No 00:15:42.905 Namespace Write Protected: No 00:15:42.905 Number of LBA Formats: 1 00:15:42.905 Current LBA Format: LBA Format #00 00:15:42.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:42.905 00:15:42.905 06:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
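
The spdk_nvme_perf invocation above treats the vfio-user endpoint like any local NVMe drive: -q 128 sets the queue depth, -o 4096 issues 4 KiB I/Os, -w read selects the workload, -t 5 runs for five seconds, and -c 0x2 pins the worker to core 1. Reading -s 256 as 256 MB of hugepage memory and -g as single-file DPDK memory segments is an inference from the EAL parameters printed later in this log, not something perf states here. A minimal sketch of reproducing the read/write pair by hand (the shortened variable names are illustrative, not from this run):

  # sketch only; flags mirror the logged runs: qd 128, 4 KiB I/O, 5 s, core 1
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  for wl in read write; do
      # one read pass and one write pass against the same controller
      sudo "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
  done
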
00:15:42.905 [2024-12-09 06:14:37.431871] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:48.191 Initializing NVMe Controllers 00:15:48.191 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:48.191 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:48.191 Initialization complete. Launching workers. 00:15:48.191 ======================================================== 00:15:48.191 Latency(us) 00:15:48.191 Device Information : IOPS MiB/s Average min max 00:15:48.191 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39957.58 156.08 3203.07 901.54 8651.84 00:15:48.191 ======================================================== 00:15:48.191 Total : 39957.58 156.08 3203.07 901.54 8651.84 00:15:48.191 00:15:48.191 [2024-12-09 06:14:42.449957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:48.191 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:48.191 [2024-12-09 06:14:42.639828] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:53.482 Initializing NVMe Controllers 00:15:53.482 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:53.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:53.482 Initialization complete. Launching workers. 
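
The columns of the latency table above cross-check against each other: MiB/s is IOPS times the 4 KiB I/O size, and with a fixed queue depth Little's law ties average latency to IOPS. For the read run just reported:

  39957.58 IOPS x 4096 B  =  39957.58 / 256  =  156.08 MiB/s   (matches the MiB/s column)
  128 in flight / 3203.07 us avg  =  ~39962 IOPS               (matches the measured 39957.58)
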
00:15:53.482 ======================================================== 00:15:53.482 Latency(us) 00:15:53.482 Device Information : IOPS MiB/s Average min max 00:15:53.482 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16036.77 62.64 7987.20 5985.92 14964.14 00:15:53.482 ======================================================== 00:15:53.482 Total : 16036.77 62.64 7987.20 5985.92 14964.14 00:15:53.482 00:15:53.482 [2024-12-09 06:14:47.680205] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:53.482 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:53.482 [2024-12-09 06:14:47.886058] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.785 [2024-12-09 06:14:52.942618] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.785 Initializing NVMe Controllers 00:15:58.785 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:58.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:58.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:58.785 Initialization complete. Launching workers. 00:15:58.785 Starting thread on core 2 00:15:58.785 Starting thread on core 3 00:15:58.785 Starting thread on core 1 00:15:58.785 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:58.785 [2024-12-09 06:14:53.191255] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:02.097 [2024-12-09 06:14:56.249573] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:02.097 Initializing NVMe Controllers 00:16:02.097 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:02.097 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:02.097 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:02.098 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:02.098 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:02.098 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:02.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:02.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:02.098 Initialization complete. Launching workers. 
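
The same arithmetic holds for the write pass above: 16036.77 / 256 = 62.64 MiB/s, and 128 / 7987.20 us = ~16026 IOPS against the measured 16036.77. Against the same malloc-backed namespace, 4 KiB writes run at roughly 2.5x the average latency of reads in this run.
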
00:16:02.098 Starting thread on core 1 with urgent priority queue 00:16:02.098 Starting thread on core 2 with urgent priority queue 00:16:02.098 Starting thread on core 3 with urgent priority queue 00:16:02.098 Starting thread on core 0 with urgent priority queue 00:16:02.098 SPDK bdev Controller (SPDK1 ) core 0: 10313.33 IO/s 9.70 secs/100000 ios 00:16:02.098 SPDK bdev Controller (SPDK1 ) core 1: 14338.33 IO/s 6.97 secs/100000 ios 00:16:02.098 SPDK bdev Controller (SPDK1 ) core 2: 11105.33 IO/s 9.00 secs/100000 ios 00:16:02.098 SPDK bdev Controller (SPDK1 ) core 3: 12972.00 IO/s 7.71 secs/100000 ios 00:16:02.098 ======================================================== 00:16:02.098 00:16:02.098 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:02.098 [2024-12-09 06:14:56.496868] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:02.098 Initializing NVMe Controllers 00:16:02.098 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:02.098 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:02.098 Namespace ID: 1 size: 0GB 00:16:02.098 Initialization complete. 00:16:02.098 INFO: using host memory buffer for IO 00:16:02.098 Hello world! 00:16:02.098 [2024-12-09 06:14:56.531068] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:02.098 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:02.357 [2024-12-09 06:14:56.772847] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.298 Initializing NVMe Controllers 00:16:03.298 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.298 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.298 Initialization complete. Launching workers. 
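
In the arbitration summary above, the secs/100000 ios column is simply 100000 divided by the IO/s column: core 1 gives 100000 / 14338.33 = 6.97 s, core 0 gives 100000 / 10313.33 = 9.70 s, and so on. All four threads run urgent-priority queues under the same printed configuration (-a 0), and the controller reports Weighted Round Robin: Not Supported, so reading the spread between cores as scheduling and queueing noise rather than arbitration policy is an interpretation, not something the tool asserts.
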
00:16:03.298 submit (in ns) avg, min, max = 6507.9, 2878.5, 3997332.3 00:16:03.298 complete (in ns) avg, min, max = 17059.7, 1690.0, 4002966.2 00:16:03.298 00:16:03.298 Submit histogram 00:16:03.298 ================ 00:16:03.298 Range in us Cumulative Count 00:16:03.298 2.868 - 2.880: 0.0051% ( 1) 00:16:03.298 2.880 - 2.892: 0.0459% ( 8) 00:16:03.298 2.892 - 2.905: 0.2193% ( 34) 00:16:03.298 2.905 - 2.917: 0.6680% ( 88) 00:16:03.298 2.917 - 2.929: 2.1161% ( 284) 00:16:03.298 2.929 - 2.942: 4.5533% ( 478) 00:16:03.298 2.942 - 2.954: 8.3826% ( 751) 00:16:03.298 2.954 - 2.966: 13.0940% ( 924) 00:16:03.298 2.966 - 2.978: 18.2541% ( 1012) 00:16:03.298 2.978 - 2.991: 23.3989% ( 1009) 00:16:03.298 2.991 - 3.003: 29.4310% ( 1183) 00:16:03.298 3.003 - 3.015: 34.6268% ( 1019) 00:16:03.298 3.015 - 3.028: 40.1132% ( 1076) 00:16:03.298 3.028 - 3.040: 45.9922% ( 1153) 00:16:03.298 3.040 - 3.052: 53.1562% ( 1405) 00:16:03.298 3.052 - 3.065: 60.3508% ( 1411) 00:16:03.298 3.065 - 3.077: 69.2841% ( 1752) 00:16:03.298 3.077 - 3.089: 77.3506% ( 1582) 00:16:03.298 3.089 - 3.102: 84.8205% ( 1465) 00:16:03.298 3.102 - 3.114: 90.0469% ( 1025) 00:16:03.298 3.114 - 3.126: 93.8405% ( 744) 00:16:03.298 3.126 - 3.138: 96.5888% ( 539) 00:16:03.298 3.138 - 3.151: 98.0165% ( 280) 00:16:03.298 3.151 - 3.175: 99.3218% ( 256) 00:16:03.298 3.175 - 3.200: 99.6278% ( 60) 00:16:03.298 3.200 - 3.225: 99.6635% ( 7) 00:16:03.298 3.274 - 3.298: 99.6686% ( 1) 00:16:03.298 4.431 - 4.455: 99.6737% ( 1) 00:16:03.298 4.677 - 4.702: 99.6788% ( 1) 00:16:03.298 4.726 - 4.751: 99.6839% ( 1) 00:16:03.298 4.849 - 4.874: 99.6890% ( 1) 00:16:03.298 4.923 - 4.948: 99.6941% ( 1) 00:16:03.298 4.948 - 4.972: 99.6992% ( 1) 00:16:03.298 4.972 - 4.997: 99.7043% ( 1) 00:16:03.298 5.046 - 5.071: 99.7094% ( 1) 00:16:03.298 5.120 - 5.145: 99.7196% ( 2) 00:16:03.298 5.169 - 5.194: 99.7247% ( 1) 00:16:03.298 5.194 - 5.218: 99.7298% ( 1) 00:16:03.298 5.218 - 5.243: 99.7349% ( 1) 00:16:03.298 5.243 - 5.268: 99.7400% ( 1) 00:16:03.298 5.268 - 5.292: 99.7451% ( 1) 00:16:03.298 5.342 - 5.366: 99.7502% ( 1) 00:16:03.298 5.366 - 5.391: 99.7553% ( 1) 00:16:03.298 5.391 - 5.415: 99.7756% ( 4) 00:16:03.298 5.415 - 5.440: 99.7807% ( 1) 00:16:03.298 5.440 - 5.465: 99.7960% ( 3) 00:16:03.298 5.465 - 5.489: 99.8113% ( 3) 00:16:03.298 5.514 - 5.538: 99.8164% ( 1) 00:16:03.298 5.538 - 5.563: 99.8215% ( 1) 00:16:03.298 5.563 - 5.588: 99.8317% ( 2) 00:16:03.298 5.612 - 5.637: 99.8419% ( 2) 00:16:03.298 5.662 - 5.686: 99.8521% ( 2) 00:16:03.298 5.686 - 5.711: 99.8623% ( 2) 00:16:03.298 5.760 - 5.785: 99.8674% ( 1) 00:16:03.298 5.785 - 5.809: 99.8725% ( 1) 00:16:03.298 5.858 - 5.883: 99.8776% ( 1) 00:16:03.298 5.908 - 5.932: 99.8929% ( 3) 00:16:03.298 5.957 - 5.982: 99.8980% ( 1) 00:16:03.298 6.031 - 6.055: 99.9031% ( 1) 00:16:03.298 6.228 - 6.252: 99.9082% ( 1) 00:16:03.298 8.665 - 8.714: 99.9133% ( 1) 00:16:03.298 3982.572 - 4007.778: 100.0000% ( 17) 00:16:03.298 00:16:03.298 Complete histogram 00:16:03.298 ================== 00:16:03.298 Range in us Cumulative Count 00:16:03.298 1.686 - 1.698: 0.3467% ( 68) 00:16:03.298 1.698 - 1.711: 0.8872% ( 106) 00:16:03.298 1.711 - 1.723: 1.0606% ( 34) 00:16:03.298 1.723 - 1.735: 1.1422% ( 16) 00:16:03.298 1.735 - 1.748: 28.7987% ( 5424) 00:16:03.298 1.748 - 1.760: 46.0279% ( 3379) 00:16:03.298 1.760 - 1.772: 66.9233% ( 4098) 00:16:03.298 1.772 - 1.785: 80.2774% ( 2619) 00:16:03.298 1.785 - 1.797: 83.3418% ( 601) 00:16:03.298 1.797 - 1.809: 85.5650% ( 436) 00:16:03.298 1.809 - 1.822: 89.9347% ( 857) 00:16:03.298 1.822 - 1.834: 94.3198% 
( 860) 00:16:03.298 1.834 - 1.846: 97.5627% ( 636) 00:16:03.298 1.846 - 1.858: 98.9904% ( 280) 00:16:03.298 1.858 - 1.871: 99.3371% ( 68) 00:16:03.298 [2024-12-09 06:14:57.794406] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.298 1.871 - 1.883: 99.4085% ( 14) 00:16:03.298 1.908 - 1.920: 99.4136% ( 1) 00:16:03.298 3.225 - 3.249: 99.4187% ( 1) 00:16:03.298 3.446 - 3.471: 99.4238% ( 1) 00:16:03.298 3.545 - 3.569: 99.4289% ( 1) 00:16:03.298 3.618 - 3.643: 99.4340% ( 1) 00:16:03.298 3.791 - 3.815: 99.4391% ( 1) 00:16:03.298 3.815 - 3.840: 99.4442% ( 1) 00:16:03.298 3.865 - 3.889: 99.4493% ( 1) 00:16:03.298 3.889 - 3.914: 99.4544% ( 1) 00:16:03.298 3.914 - 3.938: 99.4595% ( 1) 00:16:03.298 3.938 - 3.963: 99.4646% ( 1) 00:16:03.298 4.086 - 4.111: 99.4748% ( 2) 00:16:03.298 4.111 - 4.135: 99.4799% ( 1) 00:16:03.298 4.135 - 4.160: 99.4850% ( 1) 00:16:03.298 4.160 - 4.185: 99.4901% ( 1) 00:16:03.298 4.209 - 4.234: 99.4952% ( 1) 00:16:03.298 4.258 - 4.283: 99.5054% ( 2) 00:16:03.298 4.308 - 4.332: 99.5105% ( 1) 00:16:03.298 4.480 - 4.505: 99.5156% ( 1) 00:16:03.298 4.505 - 4.529: 99.5207% ( 1) 00:16:03.298 4.529 - 4.554: 99.5258% ( 1) 00:16:03.298 4.702 - 4.726: 99.5360% ( 2) 00:16:03.298 4.726 - 4.751: 99.5411% ( 1) 00:16:03.298 4.751 - 4.775: 99.5462% ( 1) 00:16:03.298 4.825 - 4.849: 99.5513% ( 1) 00:16:03.298 4.923 - 4.948: 99.5666% ( 3) 00:16:03.298 5.046 - 5.071: 99.5717% ( 1) 00:16:03.298 5.218 - 5.243: 99.5768% ( 1) 00:16:03.298 5.883 - 5.908: 99.5819% ( 1) 00:16:03.298 7.089 - 7.138: 99.5870% ( 1) 00:16:03.298 7.877 - 7.926: 99.5921% ( 1) 00:16:03.298 8.369 - 8.418: 99.5972% ( 1) 00:16:03.298 9.994 - 10.043: 99.6023% ( 1) 00:16:03.298 34.462 - 34.658: 99.6074% ( 1) 00:16:03.298 81.526 - 81.920: 99.6125% ( 1) 00:16:03.298 129.969 - 130.757: 99.6176% ( 1) 00:16:03.298 3982.572 - 4007.778: 100.0000% ( 75) 00:16:03.298 00:16:03.298 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:03.299 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:03.299 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:03.299 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:03.299 06:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.559 [ 00:16:03.559 { 00:16:03.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.559 "subtype": "Discovery", 00:16:03.559 "listen_addresses": [], 00:16:03.559 "allow_any_host": true, 00:16:03.559 "hosts": [] 00:16:03.559 }, 00:16:03.559 { 00:16:03.559 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.559 "subtype": "NVMe", 00:16:03.559 "listen_addresses": [ 00:16:03.559 { 00:16:03.559 "trtype": "VFIOUSER", 00:16:03.559 "adrfam": "IPv4", 00:16:03.559 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.559 "trsvcid": "0" 00:16:03.559 } 00:16:03.559 ], 00:16:03.559 "allow_any_host": true, 00:16:03.559 "hosts": [], 00:16:03.559 "serial_number": "SPDK1", 00:16:03.559 "model_number": "SPDK bdev Controller", 00:16:03.559 "max_namespaces": 32, 00:16:03.559 "min_cntlid": 1, 00:16:03.559 "max_cntlid": 65519, 00:16:03.559 "namespaces":
[ 00:16:03.559 { 00:16:03.559 "nsid": 1, 00:16:03.559 "bdev_name": "Malloc1", 00:16:03.559 "name": "Malloc1", 00:16:03.559 "nguid": "5DBD4A6462AC447F989E80710CFA724C", 00:16:03.559 "uuid": "5dbd4a64-62ac-447f-989e-80710cfa724c" 00:16:03.559 } 00:16:03.559 ] 00:16:03.559 }, 00:16:03.559 { 00:16:03.559 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.559 "subtype": "NVMe", 00:16:03.559 "listen_addresses": [ 00:16:03.559 { 00:16:03.559 "trtype": "VFIOUSER", 00:16:03.559 "adrfam": "IPv4", 00:16:03.559 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.559 "trsvcid": "0" 00:16:03.559 } 00:16:03.559 ], 00:16:03.559 "allow_any_host": true, 00:16:03.559 "hosts": [], 00:16:03.559 "serial_number": "SPDK2", 00:16:03.559 "model_number": "SPDK bdev Controller", 00:16:03.559 "max_namespaces": 32, 00:16:03.559 "min_cntlid": 1, 00:16:03.559 "max_cntlid": 65519, 00:16:03.559 "namespaces": [ 00:16:03.559 { 00:16:03.559 "nsid": 1, 00:16:03.559 "bdev_name": "Malloc2", 00:16:03.559 "name": "Malloc2", 00:16:03.559 "nguid": "6D020AA9BC104643A8956F4AE9B072D1", 00:16:03.559 "uuid": "6d020aa9-bc10-4643-a895-6f4ae9b072d1" 00:16:03.559 } 00:16:03.559 ] 00:16:03.559 } 00:16:03.559 ] 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=291528 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:16:03.559 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:03.819 [2024-12-09 06:14:58.164806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.819 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.820 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:03.820 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:03.820 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:03.820 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:03.820 Malloc3 00:16:04.080 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:04.080 [2024-12-09 06:14:58.574701] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.080 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.080 Asynchronous Event Request test 00:16:04.080 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.080 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.080 Registering asynchronous event callbacks... 00:16:04.080 Starting namespace attribute notice tests for all controllers... 00:16:04.080 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.080 aer_cb - Changed Namespace 00:16:04.080 Cleaning up... 00:16:04.341 [ 00:16:04.341 { 00:16:04.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.341 "subtype": "Discovery", 00:16:04.341 "listen_addresses": [], 00:16:04.341 "allow_any_host": true, 00:16:04.341 "hosts": [] 00:16:04.341 }, 00:16:04.341 { 00:16:04.341 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.341 "subtype": "NVMe", 00:16:04.341 "listen_addresses": [ 00:16:04.341 { 00:16:04.341 "trtype": "VFIOUSER", 00:16:04.341 "adrfam": "IPv4", 00:16:04.341 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.341 "trsvcid": "0" 00:16:04.341 } 00:16:04.341 ], 00:16:04.341 "allow_any_host": true, 00:16:04.341 "hosts": [], 00:16:04.341 "serial_number": "SPDK1", 00:16:04.341 "model_number": "SPDK bdev Controller", 00:16:04.341 "max_namespaces": 32, 00:16:04.341 "min_cntlid": 1, 00:16:04.341 "max_cntlid": 65519, 00:16:04.341 "namespaces": [ 00:16:04.341 { 00:16:04.341 "nsid": 1, 00:16:04.341 "bdev_name": "Malloc1", 00:16:04.341 "name": "Malloc1", 00:16:04.341 "nguid": "5DBD4A6462AC447F989E80710CFA724C", 00:16:04.341 "uuid": "5dbd4a64-62ac-447f-989e-80710cfa724c" 00:16:04.341 }, 00:16:04.341 { 00:16:04.341 "nsid": 2, 00:16:04.341 "bdev_name": "Malloc3", 00:16:04.341 "name": "Malloc3", 00:16:04.341 "nguid": "184FF70B288F43AB9374DD02AD84BB0A", 00:16:04.341 "uuid": "184ff70b-288f-43ab-9374-dd02ad84bb0a" 00:16:04.341 } 00:16:04.341 ] 00:16:04.341 }, 00:16:04.342 { 00:16:04.342 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.342 "subtype": "NVMe", 00:16:04.342 "listen_addresses": [ 00:16:04.342 { 00:16:04.342 "trtype": "VFIOUSER", 00:16:04.342 "adrfam": "IPv4", 00:16:04.342 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.342 "trsvcid": "0" 00:16:04.342 } 00:16:04.342 ], 00:16:04.342 "allow_any_host": true, 00:16:04.342 "hosts": [], 00:16:04.342 "serial_number": "SPDK2", 00:16:04.342 "model_number": "SPDK bdev Controller", 00:16:04.342 "max_namespaces": 32, 00:16:04.342 "min_cntlid": 1, 00:16:04.342 "max_cntlid": 65519, 00:16:04.342 "namespaces": [ 00:16:04.342 
{ 00:16:04.342 "nsid": 1, 00:16:04.342 "bdev_name": "Malloc2", 00:16:04.342 "name": "Malloc2", 00:16:04.342 "nguid": "6D020AA9BC104643A8956F4AE9B072D1", 00:16:04.342 "uuid": "6d020aa9-bc10-4643-a895-6f4ae9b072d1" 00:16:04.342 } 00:16:04.342 ] 00:16:04.342 } 00:16:04.342 ] 00:16:04.342 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 291528 00:16:04.342 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.342 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.342 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.342 06:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:04.342 [2024-12-09 06:14:58.794863] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:16:04.342 [2024-12-09 06:14:58.794904] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291812 ] 00:16:04.342 [2024-12-09 06:14:58.832411] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:04.342 [2024-12-09 06:14:58.841668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.342 [2024-12-09 06:14:58.841689] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f059b400000 00:16:04.342 [2024-12-09 06:14:58.842668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.843674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.844681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.845692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.846699] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.850479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.850726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.851735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.342 [2024-12-09 06:14:58.852738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap 
offset 32 00:16:04.342 [2024-12-09 06:14:58.852748] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f059b3f5000 00:16:04.342 [2024-12-09 06:14:58.853686] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.342 [2024-12-09 06:14:58.865222] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:04.342 [2024-12-09 06:14:58.865240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:04.342 [2024-12-09 06:14:58.867287] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.342 [2024-12-09 06:14:58.867321] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:04.342 [2024-12-09 06:14:58.867383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:04.342 [2024-12-09 06:14:58.867392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:04.342 [2024-12-09 06:14:58.867397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:04.342 [2024-12-09 06:14:58.868294] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:04.342 [2024-12-09 06:14:58.868302] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:04.342 [2024-12-09 06:14:58.868308] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:04.342 [2024-12-09 06:14:58.869298] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.342 [2024-12-09 06:14:58.869305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:04.342 [2024-12-09 06:14:58.869311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.870307] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:04.342 [2024-12-09 06:14:58.870314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.871312] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:04.342 [2024-12-09 06:14:58.871318] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:04.342 [2024-12-09 06:14:58.871322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is 
disabled (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.871327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.871433] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:04.342 [2024-12-09 06:14:58.871436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.871442] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:04.342 [2024-12-09 06:14:58.875453] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:04.342 [2024-12-09 06:14:58.876356] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:04.342 [2024-12-09 06:14:58.877362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.342 [2024-12-09 06:14:58.878364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.342 [2024-12-09 06:14:58.878395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:04.342 [2024-12-09 06:14:58.879372] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:04.342 [2024-12-09 06:14:58.879379] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:04.342 [2024-12-09 06:14:58.879383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:04.342 [2024-12-09 06:14:58.879398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:04.342 [2024-12-09 06:14:58.879404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:04.342 [2024-12-09 06:14:58.879415] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.342 [2024-12-09 06:14:58.879419] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.342 [2024-12-09 06:14:58.879422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.342 [2024-12-09 06:14:58.879431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.342 [2024-12-09 06:14:58.886455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:04.342 [2024-12-09 06:14:58.886464] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:04.342 
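
The IDENTIFY (06h) admin commands in this bring-up differ only in cdw10, which carries the CNS code; per the NVMe specification, the values seen here and in the next few steps decode as:

  cdw10:00000001   CNS 01h   Identify Controller (4 KiB into PRP1 0x2000002fb000)
  cdw10:00000002   CNS 02h   Active Namespace ID list
  cdw10:00000000   CNS 00h   Identify Namespace, for NSID 1
  cdw10:00000003   CNS 03h   Namespace Identification Descriptor list, NSID 1

Each response fits in a single 4 KiB host buffer, which is why every submission is preceded by a single-entry PRP setup (Number of PRP entries: 1).
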
[2024-12-09 06:14:58.886470] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:04.342 [2024-12-09 06:14:58.886473] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:04.342 [2024-12-09 06:14:58.886477] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:04.342 [2024-12-09 06:14:58.886481] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:04.342 [2024-12-09 06:14:58.886484] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:04.342 [2024-12-09 06:14:58.886488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:04.342 [2024-12-09 06:14:58.886493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:04.342 [2024-12-09 06:14:58.886501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:04.342 [2024-12-09 06:14:58.894453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:04.343 [2024-12-09 06:14:58.894464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.343 [2024-12-09 06:14:58.894470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.343 [2024-12-09 06:14:58.894477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.343 [2024-12-09 06:14:58.894483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.343 [2024-12-09 06:14:58.894486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.894493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.894500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:04.343 [2024-12-09 06:14:58.902454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:04.343 [2024-12-09 06:14:58.902460] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:04.343 [2024-12-09 06:14:58.902464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.902469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number 
of queues (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.902473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.902480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.343 [2024-12-09 06:14:58.910452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:04.343 [2024-12-09 06:14:58.910499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.910505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.910511] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:04.343 [2024-12-09 06:14:58.910514] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:04.343 [2024-12-09 06:14:58.910517] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.343 [2024-12-09 06:14:58.910521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:04.343 [2024-12-09 06:14:58.918454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:04.343 [2024-12-09 06:14:58.918462] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:04.343 [2024-12-09 06:14:58.918474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.918480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:04.343 [2024-12-09 06:14:58.918485] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.343 [2024-12-09 06:14:58.918490] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.343 [2024-12-09 06:14:58.918493] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.343 [2024-12-09 06:14:58.918497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.926453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.926464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.926470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.926476] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.605 [2024-12-09 06:14:58.926479] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.605 [2024-12-09 06:14:58.926482] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.605 [2024-12-09 06:14:58.926486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.934452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.934459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934488] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:04.605 [2024-12-09 06:14:58.934491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:04.605 [2024-12-09 06:14:58.934495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:04.605 [2024-12-09 06:14:58.934509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.942452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.942463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.950452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.950462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.958464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 
PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.966452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.966464] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:04.605 [2024-12-09 06:14:58.966468] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:04.605 [2024-12-09 06:14:58.966470] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:04.605 [2024-12-09 06:14:58.966473] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:04.605 [2024-12-09 06:14:58.966475] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:04.605 [2024-12-09 06:14:58.966480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:04.605 [2024-12-09 06:14:58.966486] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:04.605 [2024-12-09 06:14:58.966489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:04.605 [2024-12-09 06:14:58.966491] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.605 [2024-12-09 06:14:58.966496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.966501] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:04.605 [2024-12-09 06:14:58.966504] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.605 [2024-12-09 06:14:58.966507] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.605 [2024-12-09 06:14:58.966511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.966517] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:04.605 [2024-12-09 06:14:58.966520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:04.605 [2024-12-09 06:14:58.966523] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.605 [2024-12-09 06:14:58.966527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:04.605 [2024-12-09 06:14:58.974452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.974463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.974470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:04.605 [2024-12-09 06:14:58.974475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:04.605 
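
The raw register values in the vfio-user debug lines decode directly against the NVMe controller register map, where offset 0x14 is CC and 0x1c is CSTS:

  CC   = 0x460001   EN=1, IOSQES=6 (64-byte SQ entries), IOCQES=4 (16-byte CQ entries)
  CC   = 0x464001   the same, plus SHN=01b: a normal shutdown request
  CSTS = 0x1        RDY=1, controller ready
  CSTS = 0x9        RDY=1 with SHST=10b, shutdown processing complete

That is the handshake this log shows twice per controller: write CC.EN=1, poll CSTS until RDY=1, and on teardown write CC.SHN and poll until SHST reads complete.
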
===================================================== 00:16:04.605 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.605 ===================================================== 00:16:04.605 Controller Capabilities/Features 00:16:04.605 ================================ 00:16:04.605 Vendor ID: 4e58 00:16:04.605 Subsystem Vendor ID: 4e58 00:16:04.605 Serial Number: SPDK2 00:16:04.605 Model Number: SPDK bdev Controller 00:16:04.605 Firmware Version: 25.01 00:16:04.605 Recommended Arb Burst: 6 00:16:04.605 IEEE OUI Identifier: 8d 6b 50 00:16:04.605 Multi-path I/O 00:16:04.605 May have multiple subsystem ports: Yes 00:16:04.605 May have multiple controllers: Yes 00:16:04.605 Associated with SR-IOV VF: No 00:16:04.605 Max Data Transfer Size: 131072 00:16:04.605 Max Number of Namespaces: 32 00:16:04.605 Max Number of I/O Queues: 127 00:16:04.605 NVMe Specification Version (VS): 1.3 00:16:04.605 NVMe Specification Version (Identify): 1.3 00:16:04.605 Maximum Queue Entries: 256 00:16:04.605 Contiguous Queues Required: Yes 00:16:04.605 Arbitration Mechanisms Supported 00:16:04.605 Weighted Round Robin: Not Supported 00:16:04.605 Vendor Specific: Not Supported 00:16:04.605 Reset Timeout: 15000 ms 00:16:04.605 Doorbell Stride: 4 bytes 00:16:04.605 NVM Subsystem Reset: Not Supported 00:16:04.605 Command Sets Supported 00:16:04.605 NVM Command Set: Supported 00:16:04.605 Boot Partition: Not Supported 00:16:04.605 Memory Page Size Minimum: 4096 bytes 00:16:04.605 Memory Page Size Maximum: 4096 bytes 00:16:04.605 Persistent Memory Region: Not Supported 00:16:04.605 Optional Asynchronous Events Supported 00:16:04.605 Namespace Attribute Notices: Supported 00:16:04.605 Firmware Activation Notices: Not Supported 00:16:04.605 ANA Change Notices: Not Supported 00:16:04.605 PLE Aggregate Log Change Notices: Not Supported 00:16:04.605 LBA Status Info Alert Notices: Not Supported 00:16:04.605 EGE Aggregate Log Change Notices: Not Supported 00:16:04.605 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.605 Zone Descriptor Change Notices: Not Supported 00:16:04.605 Discovery Log Change Notices: Not Supported 00:16:04.605 Controller Attributes 00:16:04.605 128-bit Host Identifier: Supported 00:16:04.605 Non-Operational Permissive Mode: Not Supported 00:16:04.605 NVM Sets: Not Supported 00:16:04.605 Read Recovery Levels: Not Supported 00:16:04.605 Endurance Groups: Not Supported 00:16:04.605 Predictable Latency Mode: Not Supported 00:16:04.605 Traffic Based Keep ALive: Not Supported 00:16:04.605 Namespace Granularity: Not Supported 00:16:04.605 SQ Associations: Not Supported 00:16:04.605 UUID List: Not Supported 00:16:04.605 Multi-Domain Subsystem: Not Supported 00:16:04.606 Fixed Capacity Management: Not Supported 00:16:04.606 Variable Capacity Management: Not Supported 00:16:04.606 Delete Endurance Group: Not Supported 00:16:04.606 Delete NVM Set: Not Supported 00:16:04.606 Extended LBA Formats Supported: Not Supported 00:16:04.606 Flexible Data Placement Supported: Not Supported 00:16:04.606 00:16:04.606 Controller Memory Buffer Support 00:16:04.606 ================================ 00:16:04.606 Supported: No 00:16:04.606 00:16:04.606 Persistent Memory Region Support 00:16:04.606 ================================ 00:16:04.606 Supported: No 00:16:04.606 00:16:04.606 Admin Command Set Attributes 00:16:04.606 ============================ 00:16:04.606 Security Send/Receive: Not Supported 00:16:04.606 Format NVM: Not Supported 00:16:04.606 Firmware 
Activate/Download: Not Supported 00:16:04.606 Namespace Management: Not Supported 00:16:04.606 Device Self-Test: Not Supported 00:16:04.606 Directives: Not Supported 00:16:04.606 NVMe-MI: Not Supported 00:16:04.606 Virtualization Management: Not Supported 00:16:04.606 Doorbell Buffer Config: Not Supported 00:16:04.606 Get LBA Status Capability: Not Supported 00:16:04.606 Command & Feature Lockdown Capability: Not Supported 00:16:04.606 Abort Command Limit: 4 00:16:04.606 Async Event Request Limit: 4 00:16:04.606 Number of Firmware Slots: N/A 00:16:04.606 Firmware Slot 1 Read-Only: N/A 00:16:04.606 Firmware Activation Without Reset: N/A 00:16:04.606 Multiple Update Detection Support: N/A 00:16:04.606 Firmware Update Granularity: No Information Provided 00:16:04.606 Per-Namespace SMART Log: No 00:16:04.606 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.606 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:04.606 Command Effects Log Page: Supported 00:16:04.606 Get Log Page Extended Data: Supported 00:16:04.606 Telemetry Log Pages: Not Supported 00:16:04.606 Persistent Event Log Pages: Not Supported 00:16:04.606 Supported Log Pages Log Page: May Support 00:16:04.606 Commands Supported & Effects Log Page: Not Supported 00:16:04.606 Feature Identifiers & Effects Log Page:May Support 00:16:04.606 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.606 Data Area 4 for Telemetry Log: Not Supported 00:16:04.606 Error Log Page Entries Supported: 128 00:16:04.606 Keep Alive: Supported 00:16:04.606 Keep Alive Granularity: 10000 ms 00:16:04.606 00:16:04.606 NVM Command Set Attributes 00:16:04.606 ========================== 00:16:04.606 Submission Queue Entry Size 00:16:04.606 Max: 64 00:16:04.606 Min: 64 00:16:04.606 Completion Queue Entry Size 00:16:04.606 Max: 16 00:16:04.606 Min: 16 00:16:04.606 Number of Namespaces: 32 00:16:04.606 Compare Command: Supported 00:16:04.606 Write Uncorrectable Command: Not Supported 00:16:04.606 Dataset Management Command: Supported 00:16:04.606 Write Zeroes Command: Supported 00:16:04.606 Set Features Save Field: Not Supported 00:16:04.606 Reservations: Not Supported 00:16:04.606 Timestamp: Not Supported 00:16:04.606 Copy: Supported 00:16:04.606 Volatile Write Cache: Present 00:16:04.606 Atomic Write Unit (Normal): 1 00:16:04.606 Atomic Write Unit (PFail): 1 00:16:04.606 Atomic Compare & Write Unit: 1 00:16:04.606 Fused Compare & Write: Supported 00:16:04.606 Scatter-Gather List 00:16:04.606 SGL Command Set: Supported (Dword aligned) 00:16:04.606 SGL Keyed: Not Supported 00:16:04.606 SGL Bit Bucket Descriptor: Not Supported 00:16:04.606 SGL Metadata Pointer: Not Supported 00:16:04.606 Oversized SGL: Not Supported 00:16:04.606 SGL Metadata Address: Not Supported 00:16:04.606 SGL Offset: Not Supported 00:16:04.606 Transport SGL Data Block: Not Supported 00:16:04.606 Replay Protected Memory Block: Not Supported 00:16:04.606 00:16:04.606 Firmware Slot Information 00:16:04.606 ========================= 00:16:04.606 Active slot: 1 00:16:04.606 Slot 1 Firmware Revision: 25.01 00:16:04.606 00:16:04.606 00:16:04.606 Commands Supported and Effects 00:16:04.606 ============================== 00:16:04.606 Admin Commands 00:16:04.606 -------------- 00:16:04.606 Get Log Page (02h): Supported 00:16:04.606 Identify (06h): Supported 00:16:04.606 Abort (08h): Supported 00:16:04.606 Set Features (09h): Supported 00:16:04.606 Get Features (0Ah): Supported 00:16:04.606 Asynchronous Event Request (0Ch): Supported 00:16:04.606 Keep Alive (18h): Supported 00:16:04.606 I/O 
Commands 00:16:04.606 ------------ 00:16:04.606 Flush (00h): Supported LBA-Change 00:16:04.606 Write (01h): Supported LBA-Change 00:16:04.606 Read (02h): Supported 00:16:04.606 Compare (05h): Supported 00:16:04.606 Write Zeroes (08h): Supported LBA-Change 00:16:04.606 Dataset Management (09h): Supported LBA-Change 00:16:04.606 Copy (19h): Supported LBA-Change 00:16:04.606 00:16:04.606 Error Log 00:16:04.606 ========= 00:16:04.606 00:16:04.606 Arbitration 00:16:04.606 =========== 00:16:04.606 Arbitration Burst: 1 00:16:04.606 00:16:04.606 Power Management 00:16:04.606 ================ 00:16:04.606 Number of Power States: 1 00:16:04.606 Current Power State: Power State #0 00:16:04.606 Power State #0: 00:16:04.606 Max Power: 0.00 W 00:16:04.606 Non-Operational State: Operational 00:16:04.606 Entry Latency: Not Reported 00:16:04.606 Exit Latency: Not Reported 00:16:04.606 Relative Read Throughput: 0 00:16:04.606 Relative Read Latency: 0 00:16:04.606 Relative Write Throughput: 0 00:16:04.606 Relative Write Latency: 0 00:16:04.606 Idle Power: Not Reported 00:16:04.606 Active Power: Not Reported 00:16:04.606 Non-Operational Permissive Mode: Not Supported 00:16:04.606 00:16:04.606 Health Information 00:16:04.606 ================== 00:16:04.606 Critical Warnings: 00:16:04.606 Available Spare Space: OK 00:16:04.606 Temperature: OK 00:16:04.606 Device Reliability: OK 00:16:04.606 Read Only: No 00:16:04.606 Volatile Memory Backup: OK 00:16:04.606 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:04.606 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:04.606 Available Spare: 0% 00:16:04.606 [2024-12-09 06:14:58.974553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:04.606 [2024-12-09 06:14:58.982453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:04.606 [2024-12-09 06:14:58.982480] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:04.606 [2024-12-09 06:14:58.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.606 [2024-12-09 06:14:58.982493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.606 [2024-12-09 06:14:58.982498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.606 [2024-12-09 06:14:58.982503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.606 [2024-12-09 06:14:58.982535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.606 [2024-12-09 06:14:58.982542] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:04.606 [2024-12-09 06:14:58.983539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.606 [2024-12-09 06:14:58.983574] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:04.606 [2024-12-09 06:14:58.983579] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:04.606 [2024-12-09 06:14:58.984539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:04.606 [2024-12-09 06:14:58.984548] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:04.606 [2024-12-09 06:14:58.984592] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:04.606 [2024-12-09 06:14:58.985573] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.606 are Threshold: 0% 00:16:04.606 Life Percentage Used: 0% 00:16:04.606 Data Units Read: 0 00:16:04.606 Data Units Written: 0 00:16:04.606 Host Read Commands: 0 00:16:04.606 Host Write Commands: 0 00:16:04.606 Controller Busy Time: 0 minutes 00:16:04.606 Power Cycles: 0 00:16:04.606 Power On Hours: 0 hours 00:16:04.606 Unsafe Shutdowns: 0 00:16:04.606 Unrecoverable Media Errors: 0 00:16:04.606 Lifetime Error Log Entries: 0 00:16:04.606 Warning Temperature Time: 0 minutes 00:16:04.606 Critical Temperature Time: 0 minutes 00:16:04.606 00:16:04.606 Number of Queues 00:16:04.606 ================ 00:16:04.606 Number of I/O Submission Queues: 127 00:16:04.606 Number of I/O Completion Queues: 127 00:16:04.606 00:16:04.606 Active Namespaces 00:16:04.606 ================= 00:16:04.606 Namespace ID:1 00:16:04.606 Error Recovery Timeout: Unlimited 00:16:04.606 Command Set Identifier: NVM (00h) 00:16:04.606 Deallocate: Supported 00:16:04.607 Deallocated/Unwritten Error: Not Supported 00:16:04.607 Deallocated Read Value: Unknown 00:16:04.607 Deallocate in Write Zeroes: Not Supported 00:16:04.607 Deallocated Guard Field: 0xFFFF 00:16:04.607 Flush: Supported 00:16:04.607 Reservation: Supported 00:16:04.607 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.607 Size (in LBAs): 131072 (0GiB) 00:16:04.607 Capacity (in LBAs): 131072 (0GiB) 00:16:04.607 Utilization (in LBAs): 131072 (0GiB) 00:16:04.607 NGUID: 6D020AA9BC104643A8956F4AE9B072D1 00:16:04.607 UUID: 6d020aa9-bc10-4643-a895-6f4ae9b072d1 00:16:04.607 Thin Provisioning: Not Supported 00:16:04.607 Per-NS Atomic Units: Yes 00:16:04.607 Atomic Boundary Size (Normal): 0 00:16:04.607 Atomic Boundary Size (PFail): 0 00:16:04.607 Atomic Boundary Offset: 0 00:16:04.607 Maximum Single Source Range Length: 65535 00:16:04.607 Maximum Copy Length: 65535 00:16:04.607 Maximum Source Range Count: 1 00:16:04.607 NGUID/EUI64 Never Reused: No 00:16:04.607 Namespace Write Protected: No 00:16:04.607 Number of LBA Formats: 1 00:16:04.607 Current LBA Format: LBA Format #00 00:16:04.607 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.607 00:16:04.607 06:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:04.607 [2024-12-09 06:14:59.174692] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.892 Initializing NVMe Controllers 00:16:09.892 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.892 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:09.892 Initialization complete. Launching workers. 00:16:09.892 ======================================================== 00:16:09.892 Latency(us) 00:16:09.892 Device Information : IOPS MiB/s Average min max 00:16:09.892 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39954.54 156.07 3203.31 892.70 8646.92 00:16:09.892 ======================================================== 00:16:09.892 Total : 39954.54 156.07 3203.31 892.70 8646.92 00:16:09.892 00:16:09.892 [2024-12-09 06:15:04.282638] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.892 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:09.892 [2024-12-09 06:15:04.472201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.182 Initializing NVMe Controllers 00:16:15.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:15.182 Initialization complete. Launching workers. 00:16:15.182 ======================================================== 00:16:15.182 Latency(us) 00:16:15.182 Device Information : IOPS MiB/s Average min max 00:16:15.182 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39973.33 156.15 3201.81 890.92 9724.84 00:16:15.182 ======================================================== 00:16:15.182 Total : 39973.33 156.15 3201.81 890.92 9724.84 00:16:15.182 00:16:15.182 [2024-12-09 06:15:09.489243] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.182 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:15.182 [2024-12-09 06:15:09.704024] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.469 [2024-12-09 06:15:14.829529] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.469 Initializing NVMe Controllers 00:16:20.469 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.469 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.469 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:20.469 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:20.469 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:20.469 Initialization complete. Launching workers. 
00:16:20.469 Starting thread on core 2 00:16:20.469 Starting thread on core 3 00:16:20.469 Starting thread on core 1 00:16:20.469 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:20.730 [2024-12-09 06:15:15.086823] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:24.028 [2024-12-09 06:15:18.148146] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:24.029 Initializing NVMe Controllers 00:16:24.029 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.029 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.029 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:24.029 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:24.029 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:24.029 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:24.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:24.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:24.029 Initialization complete. Launching workers. 00:16:24.029 Starting thread on core 1 with urgent priority queue 00:16:24.029 Starting thread on core 2 with urgent priority queue 00:16:24.029 Starting thread on core 3 with urgent priority queue 00:16:24.029 Starting thread on core 0 with urgent priority queue 00:16:24.029 SPDK bdev Controller (SPDK2 ) core 0: 12833.00 IO/s 7.79 secs/100000 ios 00:16:24.029 SPDK bdev Controller (SPDK2 ) core 1: 11910.67 IO/s 8.40 secs/100000 ios 00:16:24.029 SPDK bdev Controller (SPDK2 ) core 2: 14762.67 IO/s 6.77 secs/100000 ios 00:16:24.029 SPDK bdev Controller (SPDK2 ) core 3: 8020.67 IO/s 12.47 secs/100000 ios 00:16:24.029 ======================================================== 00:16:24.029 00:16:24.029 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.029 [2024-12-09 06:15:18.390794] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:24.029 Initializing NVMe Controllers 00:16:24.029 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.029 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.029 Namespace ID: 1 size: 0GB 00:16:24.029 Initialization complete. 00:16:24.029 INFO: using host memory buffer for IO 00:16:24.029 Hello world! 
00:16:24.029 [2024-12-09 06:15:18.399857] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:24.029 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.289 [2024-12-09 06:15:18.644372] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.229 Initializing NVMe Controllers 00:16:25.229 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.229 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.229 Initialization complete. Launching workers. 00:16:25.229 submit (in ns) avg, min, max = 4440.8, 2881.5, 3998619.2 00:16:25.229 complete (in ns) avg, min, max = 18001.2, 1680.8, 6989140.8 00:16:25.229 00:16:25.229 Submit histogram 00:16:25.229 ================ 00:16:25.229 Range in us Cumulative Count 00:16:25.229 2.880 - 2.892: 0.1761% ( 35) 00:16:25.229 2.892 - 2.905: 0.7145% ( 107) 00:16:25.229 2.905 - 2.917: 2.7978% ( 414) 00:16:25.229 2.917 - 2.929: 5.6760% ( 572) 00:16:25.229 2.929 - 2.942: 10.3306% ( 925) 00:16:25.229 2.942 - 2.954: 14.7688% ( 882) 00:16:25.229 2.954 - 2.966: 19.6951% ( 979) 00:16:25.229 2.966 - 2.978: 24.6465% ( 984) 00:16:25.229 2.978 - 2.991: 30.5087% ( 1165) 00:16:25.229 2.991 - 3.003: 36.0590% ( 1103) 00:16:25.229 3.003 - 3.015: 41.2922% ( 1040) 00:16:25.229 3.015 - 3.028: 46.5808% ( 1051) 00:16:25.229 3.028 - 3.040: 51.7989% ( 1037) 00:16:25.229 3.040 - 3.052: 58.9141% ( 1414) 00:16:25.229 3.052 - 3.065: 67.4684% ( 1700) 00:16:25.229 3.065 - 3.077: 76.3045% ( 1756) 00:16:25.229 3.077 - 3.089: 83.0725% ( 1345) 00:16:25.229 3.089 - 3.102: 89.0354% ( 1185) 00:16:25.229 3.102 - 3.114: 93.4987% ( 887) 00:16:25.229 3.114 - 3.126: 96.4827% ( 593) 00:16:25.229 3.126 - 3.138: 98.3546% ( 372) 00:16:25.229 3.138 - 3.151: 99.0590% ( 140) 00:16:25.229 3.151 - 3.175: 99.5220% ( 92) 00:16:25.229 3.175 - 3.200: 99.6377% ( 23) 00:16:25.229 3.200 - 3.225: 99.6629% ( 5) 00:16:25.229 3.225 - 3.249: 99.6729% ( 2) 00:16:25.229 3.274 - 3.298: 99.6780% ( 1) 00:16:25.229 3.348 - 3.372: 99.6830% ( 1) 00:16:25.229 3.643 - 3.668: 99.6880% ( 1) 00:16:25.229 4.529 - 4.554: 99.6981% ( 2) 00:16:25.229 4.554 - 4.578: 99.7031% ( 1) 00:16:25.229 4.652 - 4.677: 99.7081% ( 1) 00:16:25.229 4.726 - 4.751: 99.7132% ( 1) 00:16:25.229 4.751 - 4.775: 99.7232% ( 2) 00:16:25.229 4.800 - 4.825: 99.7283% ( 1) 00:16:25.229 4.849 - 4.874: 99.7434% ( 3) 00:16:25.229 4.874 - 4.898: 99.7484% ( 1) 00:16:25.229 4.997 - 5.022: 99.7534% ( 1) 00:16:25.229 5.022 - 5.046: 99.7736% ( 4) 00:16:25.229 5.071 - 5.095: 99.7786% ( 1) 00:16:25.229 5.120 - 5.145: 99.7836% ( 1) 00:16:25.229 5.145 - 5.169: 99.7937% ( 2) 00:16:25.229 5.169 - 5.194: 99.8038% ( 2) 00:16:25.229 5.194 - 5.218: 99.8088% ( 1) 00:16:25.229 5.243 - 5.268: 99.8188% ( 2) 00:16:25.229 5.268 - 5.292: 99.8239% ( 1) 00:16:25.229 5.292 - 5.317: 99.8289% ( 1) 00:16:25.229 5.317 - 5.342: 99.8339% ( 1) 00:16:25.229 5.366 - 5.391: 99.8440% ( 2) 00:16:25.229 5.415 - 5.440: 99.8490% ( 1) 00:16:25.229 5.440 - 5.465: 99.8541% ( 1) 00:16:25.229 5.465 - 5.489: 99.8591% ( 1) 00:16:25.229 5.514 - 5.538: 99.8641% ( 1) 00:16:25.229 5.538 - 5.563: 99.8742% ( 2) 00:16:25.229 5.563 - 5.588: 99.8843% ( 2) 00:16:25.229 5.637 - 5.662: 99.8893% ( 1) 00:16:25.229 5.711 - 5.735: 99.8943% ( 1) 00:16:25.229 5.735 - 5.760: 
99.8994% ( 1) 00:16:25.229 5.760 - 5.785: 99.9044% ( 1) 00:16:25.229 5.809 - 5.834: 99.9094% ( 1) 00:16:25.229 5.834 - 5.858: 99.9145% ( 1) 00:16:25.229 6.006 - 6.031: 99.9195% ( 1) 00:16:25.229 6.129 - 6.154: 99.9245% ( 1) 00:16:25.229 6.154 - 6.178: 99.9346% ( 2) 00:16:25.229 6.178 - 6.203: 99.9396% ( 1) 00:16:25.229 6.302 - 6.351: 99.9446% ( 1) 00:16:25.229 6.843 - 6.892: 99.9597% ( 3) 00:16:25.229 8.172 - 8.222: 99.9648% ( 1) 00:16:25.229 3982.572 - 4007.778: 100.0000% ( 7) 00:16:25.229 00:16:25.229 Complete histogram 00:16:25.229 ================== 00:16:25.229 Range in us Cumulative Count 00:16:25.229 1.674 - 1.686: 0.1962% ( 39) 00:16:25.229 1.686 - 1.698: 0.7649% ( 113) 00:16:25.229 1.698 - 1.711: 0.8806% ( 23) 00:16:25.229 1.711 - 1.723: 0.9712% ( 18) 00:16:25.229 1.723 - 1.735: 1.1775% ( 41) 00:16:25.229 1.735 - 1.748: 41.5035% ( 8014) 00:16:25.229 1.748 - 1.760: 56.5491% ( 2990) 00:16:25.229 1.760 - 1.772: 76.1083% ( 3887) 00:16:25.229 1.772 - [2024-12-09 06:15:19.736030] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.229 1.785: 82.8612% ( 1342) 00:16:25.229 1.785 - 1.797: 84.8287% ( 391) 00:16:25.229 1.797 - 1.809: 87.4704% ( 525) 00:16:25.229 1.809 - 1.822: 91.8130% ( 863) 00:16:25.229 1.822 - 1.834: 95.5618% ( 745) 00:16:25.229 1.834 - 1.846: 98.1080% ( 506) 00:16:25.229 1.846 - 1.858: 99.1043% ( 198) 00:16:25.229 1.858 - 1.871: 99.3408% ( 47) 00:16:25.229 1.871 - 1.883: 99.3861% ( 9) 00:16:25.229 1.883 - 1.895: 99.4012% ( 3) 00:16:25.229 1.895 - 1.908: 99.4113% ( 2) 00:16:25.229 1.908 - 1.920: 99.4213% ( 2) 00:16:25.229 1.932 - 1.945: 99.4264% ( 1) 00:16:25.229 1.945 - 1.957: 99.4314% ( 1) 00:16:25.229 3.151 - 3.175: 99.4364% ( 1) 00:16:25.229 3.471 - 3.495: 99.4415% ( 1) 00:16:25.229 3.495 - 3.520: 99.4465% ( 1) 00:16:25.229 3.520 - 3.545: 99.4515% ( 1) 00:16:25.229 3.569 - 3.594: 99.4565% ( 1) 00:16:25.229 3.643 - 3.668: 99.4616% ( 1) 00:16:25.229 3.668 - 3.692: 99.4666% ( 1) 00:16:25.229 3.692 - 3.717: 99.4716% ( 1) 00:16:25.229 3.865 - 3.889: 99.4767% ( 1) 00:16:25.229 3.889 - 3.914: 99.4817% ( 1) 00:16:25.229 3.914 - 3.938: 99.4867% ( 1) 00:16:25.229 4.012 - 4.037: 99.4918% ( 1) 00:16:25.229 4.037 - 4.062: 99.5018% ( 2) 00:16:25.229 4.086 - 4.111: 99.5119% ( 2) 00:16:25.229 4.111 - 4.135: 99.5169% ( 1) 00:16:25.229 4.160 - 4.185: 99.5220% ( 1) 00:16:25.229 4.258 - 4.283: 99.5270% ( 1) 00:16:25.229 4.308 - 4.332: 99.5320% ( 1) 00:16:25.229 4.455 - 4.480: 99.5371% ( 1) 00:16:25.229 4.529 - 4.554: 99.5421% ( 1) 00:16:25.229 4.751 - 4.775: 99.5471% ( 1) 00:16:25.229 4.874 - 4.898: 99.5522% ( 1) 00:16:25.229 4.898 - 4.923: 99.5572% ( 1) 00:16:25.229 5.563 - 5.588: 99.5622% ( 1) 00:16:25.229 5.612 - 5.637: 99.5673% ( 1) 00:16:25.229 6.892 - 6.942: 99.5723% ( 1) 00:16:25.229 7.532 - 7.582: 99.5773% ( 1) 00:16:25.229 8.222 - 8.271: 99.5823% ( 1) 00:16:25.229 9.058 - 9.108: 99.5874% ( 1) 00:16:25.229 11.372 - 11.422: 99.5924% ( 1) 00:16:25.229 34.855 - 35.052: 99.5974% ( 1) 00:16:25.229 2066.905 - 2079.508: 99.6025% ( 1) 00:16:25.229 3982.572 - 4007.778: 99.9899% ( 77) 00:16:25.229 5973.858 - 5999.065: 99.9950% ( 1) 00:16:25.229 6956.898 - 7007.311: 100.0000% ( 1) 00:16:25.229 00:16:25.229 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:25.229 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 
00:16:25.230 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:25.230 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:25.230 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.489 [ 00:16:25.489 { 00:16:25.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:25.489 "subtype": "Discovery", 00:16:25.489 "listen_addresses": [], 00:16:25.489 "allow_any_host": true, 00:16:25.489 "hosts": [] 00:16:25.489 }, 00:16:25.489 { 00:16:25.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.489 "subtype": "NVMe", 00:16:25.489 "listen_addresses": [ 00:16:25.489 { 00:16:25.489 "trtype": "VFIOUSER", 00:16:25.489 "adrfam": "IPv4", 00:16:25.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.489 "trsvcid": "0" 00:16:25.489 } 00:16:25.489 ], 00:16:25.489 "allow_any_host": true, 00:16:25.489 "hosts": [], 00:16:25.489 "serial_number": "SPDK1", 00:16:25.489 "model_number": "SPDK bdev Controller", 00:16:25.489 "max_namespaces": 32, 00:16:25.489 "min_cntlid": 1, 00:16:25.489 "max_cntlid": 65519, 00:16:25.489 "namespaces": [ 00:16:25.489 { 00:16:25.489 "nsid": 1, 00:16:25.489 "bdev_name": "Malloc1", 00:16:25.489 "name": "Malloc1", 00:16:25.489 "nguid": "5DBD4A6462AC447F989E80710CFA724C", 00:16:25.489 "uuid": "5dbd4a64-62ac-447f-989e-80710cfa724c" 00:16:25.489 }, 00:16:25.489 { 00:16:25.489 "nsid": 2, 00:16:25.489 "bdev_name": "Malloc3", 00:16:25.489 "name": "Malloc3", 00:16:25.489 "nguid": "184FF70B288F43AB9374DD02AD84BB0A", 00:16:25.489 "uuid": "184ff70b-288f-43ab-9374-dd02ad84bb0a" 00:16:25.489 } 00:16:25.489 ] 00:16:25.489 }, 00:16:25.489 { 00:16:25.489 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.489 "subtype": "NVMe", 00:16:25.489 "listen_addresses": [ 00:16:25.489 { 00:16:25.489 "trtype": "VFIOUSER", 00:16:25.489 "adrfam": "IPv4", 00:16:25.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.489 "trsvcid": "0" 00:16:25.489 } 00:16:25.489 ], 00:16:25.489 "allow_any_host": true, 00:16:25.489 "hosts": [], 00:16:25.489 "serial_number": "SPDK2", 00:16:25.489 "model_number": "SPDK bdev Controller", 00:16:25.489 "max_namespaces": 32, 00:16:25.489 "min_cntlid": 1, 00:16:25.489 "max_cntlid": 65519, 00:16:25.489 "namespaces": [ 00:16:25.489 { 00:16:25.489 "nsid": 1, 00:16:25.489 "bdev_name": "Malloc2", 00:16:25.489 "name": "Malloc2", 00:16:25.489 "nguid": "6D020AA9BC104643A8956F4AE9B072D1", 00:16:25.489 "uuid": "6d020aa9-bc10-4643-a895-6f4ae9b072d1" 00:16:25.489 } 00:16:25.489 ] 00:16:25.489 } 00:16:25.489 ] 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=295758 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:25.489 06:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:25.489 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:16:25.490 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:25.490 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.490 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:25.490 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:16:25.490 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:25.750 [2024-12-09 06:15:20.111858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.750 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.750 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.750 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:25.750 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:25.750 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:26.010 Malloc4 00:16:26.010 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:26.010 [2024-12-09 06:15:20.514645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.010 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.010 Asynchronous Event Request test 00:16:26.010 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.010 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.010 Registering asynchronous event callbacks... 00:16:26.010 Starting namespace attribute notice tests for all controllers... 00:16:26.010 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:26.010 aer_cb - Changed Namespace 00:16:26.010 Cleaning up... 
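The Changed Namespace notice above is provoked by hot-adding a second namespace to cnode2 over RPC while the aer tool waits on the touch file; the nvmf_get_subsystems listing that follows reflects the result. A sketch of that sequence, assuming scripts/rpc.py from an SPDK tree (the path below is a placeholder; the arguments mirror the invocations in this log):

# Sketch only: RPC is a placeholder for scripts/rpc.py in an SPDK tree.
RPC=/path/to/spdk/scripts/rpc.py
"$RPC" bdev_malloc_create 64 512 --name Malloc4                       # 64 MB ramdisk, 512 B blocks
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2  # attach as nsid 2; fires the AEN
"$RPC" nvmf_get_subsystems                                            # Malloc4 now listed under cnode2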
00:16:26.288 [ 00:16:26.288 { 00:16:26.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.288 "subtype": "Discovery", 00:16:26.288 "listen_addresses": [], 00:16:26.288 "allow_any_host": true, 00:16:26.288 "hosts": [] 00:16:26.288 }, 00:16:26.288 { 00:16:26.288 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.288 "subtype": "NVMe", 00:16:26.288 "listen_addresses": [ 00:16:26.288 { 00:16:26.288 "trtype": "VFIOUSER", 00:16:26.288 "adrfam": "IPv4", 00:16:26.288 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.288 "trsvcid": "0" 00:16:26.288 } 00:16:26.288 ], 00:16:26.288 "allow_any_host": true, 00:16:26.288 "hosts": [], 00:16:26.288 "serial_number": "SPDK1", 00:16:26.288 "model_number": "SPDK bdev Controller", 00:16:26.288 "max_namespaces": 32, 00:16:26.288 "min_cntlid": 1, 00:16:26.288 "max_cntlid": 65519, 00:16:26.288 "namespaces": [ 00:16:26.288 { 00:16:26.288 "nsid": 1, 00:16:26.288 "bdev_name": "Malloc1", 00:16:26.288 "name": "Malloc1", 00:16:26.288 "nguid": "5DBD4A6462AC447F989E80710CFA724C", 00:16:26.288 "uuid": "5dbd4a64-62ac-447f-989e-80710cfa724c" 00:16:26.288 }, 00:16:26.288 { 00:16:26.288 "nsid": 2, 00:16:26.288 "bdev_name": "Malloc3", 00:16:26.288 "name": "Malloc3", 00:16:26.288 "nguid": "184FF70B288F43AB9374DD02AD84BB0A", 00:16:26.288 "uuid": "184ff70b-288f-43ab-9374-dd02ad84bb0a" 00:16:26.288 } 00:16:26.288 ] 00:16:26.288 }, 00:16:26.288 { 00:16:26.288 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.288 "subtype": "NVMe", 00:16:26.288 "listen_addresses": [ 00:16:26.288 { 00:16:26.288 "trtype": "VFIOUSER", 00:16:26.288 "adrfam": "IPv4", 00:16:26.288 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.288 "trsvcid": "0" 00:16:26.288 } 00:16:26.288 ], 00:16:26.288 "allow_any_host": true, 00:16:26.288 "hosts": [], 00:16:26.288 "serial_number": "SPDK2", 00:16:26.288 "model_number": "SPDK bdev Controller", 00:16:26.288 "max_namespaces": 32, 00:16:26.288 "min_cntlid": 1, 00:16:26.288 "max_cntlid": 65519, 00:16:26.288 "namespaces": [ 00:16:26.288 { 00:16:26.288 "nsid": 1, 00:16:26.288 "bdev_name": "Malloc2", 00:16:26.288 "name": "Malloc2", 00:16:26.288 "nguid": "6D020AA9BC104643A8956F4AE9B072D1", 00:16:26.288 "uuid": "6d020aa9-bc10-4643-a895-6f4ae9b072d1" 00:16:26.288 }, 00:16:26.288 { 00:16:26.288 "nsid": 2, 00:16:26.288 "bdev_name": "Malloc4", 00:16:26.288 "name": "Malloc4", 00:16:26.288 "nguid": "84FC8C64807949B78ADBF32049E61915", 00:16:26.288 "uuid": "84fc8c64-8079-49b7-8adb-f32049e61915" 00:16:26.288 } 00:16:26.288 ] 00:16:26.288 } 00:16:26.288 ] 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 295758 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 287522 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 287522 ']' 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 287522 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287522 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287522' 00:16:26.288 killing process with pid 287522 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 287522 00:16:26.288 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 287522 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296052 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296052' 00:16:26.549 Process pid: 296052 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296052 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 296052 ']' 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.549 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 [2024-12-09 06:15:20.969987] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:26.549 [2024-12-09 06:15:20.970859] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:16:26.549 [2024-12-09 06:15:20.970897] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.549 [2024-12-09 06:15:21.051658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.549 [2024-12-09 06:15:21.081296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.549 [2024-12-09 06:15:21.081330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.549 [2024-12-09 06:15:21.081336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.549 [2024-12-09 06:15:21.081341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.549 [2024-12-09 06:15:21.081345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.549 [2024-12-09 06:15:21.082946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.549 [2024-12-09 06:15:21.083072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.549 [2024-12-09 06:15:21.083214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.549 [2024-12-09 06:15:21.083216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.549 [2024-12-09 06:15:21.133070] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:26.549 [2024-12-09 06:15:21.133354] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:26.810 [2024-12-09 06:15:21.134089] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:26.810 [2024-12-09 06:15:21.134632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:26.810 [2024-12-09 06:15:21.134664] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
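This second target instance runs its reactors in interrupt mode rather than the default polled mode, which is why each nvmf poll group is switched to intr mode at startup and why the script then creates the VFIOUSER transport with the extra -M -I arguments. A sketch of the bring-up, assuming a built SPDK tree (paths are placeholders; the flags are copied from this log):

# Sketch only: paths are placeholders for a built SPDK tree.
/path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# Once the reactors report started on cores 0-3, add the transport;
# -M -I are the transport arguments this interrupt-mode variant passes:
/path/to/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I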
00:16:27.380 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.380 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:27.380 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:28.321 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:28.582 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:28.582 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:28.582 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.582 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:28.582 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:28.582 Malloc1 00:16:28.582 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:28.854 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:29.114 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:29.114 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.114 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:29.114 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:29.375 Malloc2 00:16:29.375 06:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:29.636 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:29.636 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296052 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 296052 ']' 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296052 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296052 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296052' 00:16:29.898 killing process with pid 296052 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296052 00:16:29.898 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296052 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:30.159 00:16:30.159 real 0m51.146s 00:16:30.159 user 3m15.626s 00:16:30.159 sys 0m3.128s 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:30.159 ************************************ 00:16:30.159 END TEST nvmf_vfio_user 00:16:30.159 ************************************ 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.159 ************************************ 00:16:30.159 START TEST nvmf_vfio_user_nvme_compliance 00:16:30.159 ************************************ 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:30.159 * Looking for test storage... 
00:16:30.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:16:30.159 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:30.421 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.422 --rc genhtml_branch_coverage=1 00:16:30.422 --rc genhtml_function_coverage=1 00:16:30.422 --rc genhtml_legend=1 00:16:30.422 --rc geninfo_all_blocks=1 00:16:30.422 --rc geninfo_unexecuted_blocks=1 00:16:30.422 00:16:30.422 ' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.422 --rc genhtml_branch_coverage=1 00:16:30.422 --rc genhtml_function_coverage=1 00:16:30.422 --rc genhtml_legend=1 00:16:30.422 --rc geninfo_all_blocks=1 00:16:30.422 --rc geninfo_unexecuted_blocks=1 00:16:30.422 00:16:30.422 ' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.422 --rc genhtml_branch_coverage=1 00:16:30.422 --rc genhtml_function_coverage=1 00:16:30.422 --rc genhtml_legend=1 00:16:30.422 --rc geninfo_all_blocks=1 00:16:30.422 --rc geninfo_unexecuted_blocks=1 00:16:30.422 00:16:30.422 ' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.422 --rc genhtml_branch_coverage=1 00:16:30.422 --rc genhtml_function_coverage=1 00:16:30.422 --rc genhtml_legend=1 00:16:30.422 --rc geninfo_all_blocks=1 00:16:30.422 --rc 
geninfo_unexecuted_blocks=1 00:16:30.422 00:16:30.422 ' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=296744 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 296744' 00:16:30.422 Process pid: 296744 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:30.422 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 296744 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 296744 ']' 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.423 06:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:30.423 [2024-12-09 06:15:24.923243] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:16:30.423 [2024-12-09 06:15:24.923311] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.683 [2024-12-09 06:15:25.012114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:30.683 [2024-12-09 06:15:25.045746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.683 [2024-12-09 06:15:25.045781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.683 [2024-12-09 06:15:25.045787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.683 [2024-12-09 06:15:25.045792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.683 [2024-12-09 06:15:25.045797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.683 [2024-12-09 06:15:25.046957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.683 [2024-12-09 06:15:25.047105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.683 [2024-12-09 06:15:25.047108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.252 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.252 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:31.253 06:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.194 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.454 malloc0 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:32.454 06:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 06:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:32.454 00:16:32.454 00:16:32.454 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.454 http://cunit.sourceforge.net/ 00:16:32.454 00:16:32.454 00:16:32.454 Suite: nvme_compliance 00:16:32.454 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 06:15:26.986862] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.454 [2024-12-09 06:15:26.988171] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:32.454 [2024-12-09 06:15:26.988182] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:32.454 [2024-12-09 06:15:26.988187] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:32.454 [2024-12-09 06:15:26.989882] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.454 passed 00:16:32.715 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 06:15:27.068394] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.715 [2024-12-09 06:15:27.071419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.715 passed 00:16:32.715 Test: admin_identify_ns ...[2024-12-09 06:15:27.149980] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.715 [2024-12-09 06:15:27.210460] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:32.715 [2024-12-09 06:15:27.218461] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:32.715 [2024-12-09 06:15:27.239541] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:32.715 passed 00:16:32.975 Test: admin_get_features_mandatory_features ...[2024-12-09 06:15:27.317110] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.975 [2024-12-09 06:15:27.320124] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.975 passed 00:16:32.975 Test: admin_get_features_optional_features ...[2024-12-09 06:15:27.397600] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.975 [2024-12-09 06:15:27.400616] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.975 passed 00:16:32.975 Test: admin_set_features_number_of_queues ...[2024-12-09 06:15:27.476995] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.235 [2024-12-09 06:15:27.581533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.235 passed 00:16:33.235 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 06:15:27.657901] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.235 [2024-12-09 06:15:27.660926] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.235 passed 00:16:33.235 Test: admin_get_log_page_with_lpo ...[2024-12-09 06:15:27.737002] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.235 [2024-12-09 06:15:27.808458] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:33.495 [2024-12-09 06:15:27.821493] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.495 passed 00:16:33.495 Test: fabric_property_get ...[2024-12-09 06:15:27.898087] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.495 [2024-12-09 06:15:27.899291] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:33.495 [2024-12-09 06:15:27.901096] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.495 passed 00:16:33.495 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 06:15:27.976569] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.495 [2024-12-09 06:15:27.977777] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:33.495 [2024-12-09 06:15:27.979589] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.495 passed 00:16:33.495 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 06:15:28.055979] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.755 [2024-12-09 06:15:28.143456] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.755 [2024-12-09 06:15:28.159454] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.755 [2024-12-09 06:15:28.163550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.755 passed 00:16:33.755 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 06:15:28.240097] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.755 [2024-12-09 06:15:28.241304] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:33.755 [2024-12-09 06:15:28.243123] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.755 passed 00:16:33.755 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 06:15:28.318030] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.015 [2024-12-09 06:15:28.394463] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:34.015 [2024-12-09 06:15:28.418456] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.015 [2024-12-09 06:15:28.423526] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.015 passed 00:16:34.015 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 06:15:28.500070] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.015 [2024-12-09 06:15:28.501271] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:34.015 [2024-12-09 06:15:28.501288] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:34.015 [2024-12-09 06:15:28.503083] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.015 passed 00:16:34.015 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 06:15:28.577994] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.275 [2024-12-09 06:15:28.670454] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:34.275 [2024-12-09 06:15:28.678453] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:34.275 [2024-12-09 06:15:28.686454] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:34.275 [2024-12-09 06:15:28.694457] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:34.276 [2024-12-09 06:15:28.723521] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.276 passed 00:16:34.276 Test: admin_create_io_sq_verify_pc ...[2024-12-09 06:15:28.800038] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.276 [2024-12-09 06:15:28.816461] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:34.276 [2024-12-09 06:15:28.834206] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.276 passed 00:16:34.536 Test: admin_create_io_qp_max_qps ...[2024-12-09 06:15:28.909681] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.476 [2024-12-09 06:15:30.032457] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:36.046 [2024-12-09 06:15:30.412795] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.046 passed 00:16:36.046 Test: admin_create_io_sq_shared_cq ...[2024-12-09 06:15:30.486800] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.046 [2024-12-09 06:15:30.619452] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.306 [2024-12-09 06:15:30.656503] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.306 passed 00:16:36.306 00:16:36.306 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.306 suites 1 1 n/a 0 0 00:16:36.306 tests 18 18 18 0 0 00:16:36.306 asserts 
360 360 360 0 n/a 00:16:36.306 00:16:36.306 Elapsed time = 1.511 seconds 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 296744 ']' 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296744' 00:16:36.306 killing process with pid 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 296744 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:36.306 00:16:36.306 real 0m6.246s 00:16:36.306 user 0m17.722s 00:16:36.306 sys 0m0.526s 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.306 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:36.306 ************************************ 00:16:36.306 END TEST nvmf_vfio_user_nvme_compliance 00:16:36.306 ************************************ 00:16:36.567 06:15:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.567 06:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:36.567 06:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.567 06:15:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.567 ************************************ 00:16:36.567 START TEST nvmf_vfio_user_fuzz 00:16:36.567 ************************************ 00:16:36.567 06:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.567 * Looking for test storage... 
00:16:36.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:36.567 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:36.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.828 --rc genhtml_branch_coverage=1 00:16:36.828 --rc genhtml_function_coverage=1 00:16:36.828 --rc genhtml_legend=1 00:16:36.828 --rc geninfo_all_blocks=1 00:16:36.828 --rc geninfo_unexecuted_blocks=1 00:16:36.828 00:16:36.828 ' 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:36.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.828 --rc genhtml_branch_coverage=1 00:16:36.828 --rc genhtml_function_coverage=1 00:16:36.828 --rc genhtml_legend=1 00:16:36.828 --rc geninfo_all_blocks=1 00:16:36.828 --rc geninfo_unexecuted_blocks=1 00:16:36.828 00:16:36.828 ' 00:16:36.828 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:36.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.828 --rc genhtml_branch_coverage=1 00:16:36.828 --rc genhtml_function_coverage=1 00:16:36.828 --rc genhtml_legend=1 00:16:36.828 --rc geninfo_all_blocks=1 00:16:36.828 --rc geninfo_unexecuted_blocks=1 00:16:36.828 00:16:36.829 ' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:36.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.829 --rc genhtml_branch_coverage=1 00:16:36.829 --rc genhtml_function_coverage=1 00:16:36.829 --rc genhtml_legend=1 00:16:36.829 --rc geninfo_all_blocks=1 00:16:36.829 --rc geninfo_unexecuted_blocks=1 00:16:36.829 00:16:36.829 ' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:36.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=297782 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 297782' 00:16:36.829 Process pid: 297782 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 297782 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 297782 ']' 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
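Annotation: the harness has just launched nvmf_tgt (-m 0x1, single core) and parks in waitforlisten until the target's RPC socket /var/tmp/spdk.sock accepts commands. A minimal sketch of that wait pattern under an assumed shape — the real helper in autotest_common.sh retries up to $max_retries and confirms readiness with an actual RPC, which the sketch approximates with a plain socket-file check:

    # Sketch of a waitforlisten-style helper (assumed shape, not the SPDK original).
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [ -S "$sock" ] && return 0               # UNIX socket exists: target is up
            sleep 0.1
        done
        return 1                                     # timed out waiting for the target
    }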
00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.829 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:37.770 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.770 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:37.770 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 malloc0 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
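Annotation: rpc_cmd in the trace above is the harness wrapper that forwards to scripts/rpc.py. Condensed, the vfio-user fuzz target is stood up with the same RPC sequence the compliance test used, minus the -m 32 max-namespaces cap on the subsystem. Written out directly (paths as in this workspace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512 B blocks
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0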
00:16:38.714 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:10.825 Fuzzing completed. Shutting down the fuzz application 00:17:10.825 00:17:10.825 Dumping successful admin opcodes: 00:17:10.825 9, 10, 00:17:10.825 Dumping successful io opcodes: 00:17:10.825 0, 00:17:10.825 NS: 0x20000081ef00 I/O qp, Total commands completed: 1339924, total successful commands: 5255, random_seed: 2708561728 00:17:10.825 NS: 0x20000081ef00 admin qp, Total commands completed: 299632, total successful commands: 73, random_seed: 257165696 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 297782 ']' 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297782' 00:17:10.825 killing process with pid 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 297782 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:10.825 00:17:10.825 real 0m32.768s 00:17:10.825 user 0m38.101s 00:17:10.825 sys 0m23.885s 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.825 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.825 ************************************ 
00:17:10.826 END TEST nvmf_vfio_user_fuzz 00:17:10.826 ************************************ 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.826 ************************************ 00:17:10.826 START TEST nvmf_auth_target 00:17:10.826 ************************************ 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.826 * Looking for test storage... 00:17:10.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.826 --rc genhtml_branch_coverage=1 00:17:10.826 --rc genhtml_function_coverage=1 00:17:10.826 --rc genhtml_legend=1 00:17:10.826 --rc geninfo_all_blocks=1 00:17:10.826 --rc geninfo_unexecuted_blocks=1 00:17:10.826 00:17:10.826 ' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.826 --rc genhtml_branch_coverage=1 00:17:10.826 --rc genhtml_function_coverage=1 00:17:10.826 --rc genhtml_legend=1 00:17:10.826 --rc geninfo_all_blocks=1 00:17:10.826 --rc geninfo_unexecuted_blocks=1 00:17:10.826 00:17:10.826 ' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.826 --rc genhtml_branch_coverage=1 00:17:10.826 --rc genhtml_function_coverage=1 00:17:10.826 --rc genhtml_legend=1 00:17:10.826 --rc geninfo_all_blocks=1 00:17:10.826 --rc geninfo_unexecuted_blocks=1 00:17:10.826 00:17:10.826 ' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.826 --rc genhtml_branch_coverage=1 00:17:10.826 --rc genhtml_function_coverage=1 00:17:10.826 --rc genhtml_legend=1 00:17:10.826 --rc geninfo_all_blocks=1 00:17:10.826 --rc geninfo_unexecuted_blocks=1 00:17:10.826 00:17:10.826 ' 00:17:10.826 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.826 06:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.826 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.827 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:17.417 
06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:17.417 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.417 06:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:17.417 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:17.417 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:17.417 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
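Each discovered PCI function is then mapped to its kernel netdev through sysfs, keeping only interfaces that are up; that is what produces the two "Found net devices under 0000:4b:00.x: cvl_0_x" lines above. Condensed, with the up-check assumed to read operstate:

# Sketch of common.sh@410-429 as it unrolls above.
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        [[ -e $net_dev/operstate && $(<"$net_dev/operstate") == up ]] || continue
        net_devs+=("${net_dev##*/}")
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done
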
net_devs+=("${pci_net_devs[@]}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.417 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:17.418 06:16:11 
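The nvmf_tcp_init sequence above gives each E810 port a role: cvl_0_0 moves into the private namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The same steps collected in order:

# Namespace plumbing as traced at common.sh@267-287:
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
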
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:17.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:17:17.418 00:17:17.418 --- 10.0.0.2 ping statistics --- 00:17:17.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.418 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:17:17.418 00:17:17.418 --- 10.0.0.1 ping statistics --- 00:17:17.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.418 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=306767 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 306767 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 306767 ']' 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
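With both directions verified by the pings above, nvmfappstart launches the target inside the namespace (NVMF_APP was prefixed with the netns exec wrapper at common.sh@293). Reassembled from the trace, with $rootdir standing in for the jenkins workspace path shown in the log:

# Target launch as traced at common.sh@508-510.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs
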
00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.418 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=306976 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0018a949502a73681bc52cc1fe698f90b930f57d44f186e5 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JU8 00:17:17.991 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0018a949502a73681bc52cc1fe698f90b930f57d44f186e5 0 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0018a949502a73681bc52cc1fe698f90b930f57d44f186e5 0 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0018a949502a73681bc52cc1fe698f90b930f57d44f186e5 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
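gen_dhchap_key, traced above for a 48-character null-digest secret, is worth unpacking: the secret is random hex read from /dev/urandom, and format_dhchap_key wraps the ASCII hex string plus a 4-byte CRC-32 in base64 under a DHHC-1:<digest>: prefix (the base64 blobs later in this log decode exactly that way). A hedged re-implementation sketch; the CRC byte order is an assumption:

# Re-implementation sketch of gen_dhchap_key/format_dhchap_key.
gen_dhchap_key() {
    local digest=$1 len=$2    # len = hex characters of key material
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$key" "$digest" <<'PY'
import base64, binascii, sys
key = sys.argv[1].encode()                       # the ASCII hex string is the secret
crc = binascii.crc32(key).to_bytes(4, "little")  # assumed little-endian CRC-32
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
print("DHHC-1:%02x:%s:" % (digests[sys.argv[2]], base64.b64encode(key + crc).decode()))
PY
}
# usage mirroring the trace:
#   gen_dhchap_key null 48 > /tmp/spdk.key-null.XXX && chmod 0600 /tmp/spdk.key-null.XXX
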
00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JU8 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JU8 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.JU8 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1328130ef5a9717abe739e132f943e00b0838fbe292f89463b101eb591061e37 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.s7j 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1328130ef5a9717abe739e132f943e00b0838fbe292f89463b101eb591061e37 3 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1328130ef5a9717abe739e132f943e00b0838fbe292f89463b101eb591061e37 3 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1328130ef5a9717abe739e132f943e00b0838fbe292f89463b101eb591061e37 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.s7j 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.s7j 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.s7j 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=766124d7d3a9d9c6bf956edfcf4d4649 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Brd 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 766124d7d3a9d9c6bf956edfcf4d4649 1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 766124d7d3a9d9c6bf956edfcf4d4649 1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=766124d7d3a9d9c6bf956edfcf4d4649 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Brd 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Brd 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Brd 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8550581d96fe22dd2f27122d8ca5608bb30d1ca88537da68 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.F4x 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8550581d96fe22dd2f27122d8ca5608bb30d1ca88537da68 2 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8550581d96fe22dd2f27122d8ca5608bb30d1ca88537da68 2 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:17.992 06:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8550581d96fe22dd2f27122d8ca5608bb30d1ca88537da68 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:17.992 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.F4x 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.F4x 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.F4x 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6559e38734b46e78f565e0a9c08d3375e54749d1c6fc079c 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jjr 00:17:18.254 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6559e38734b46e78f565e0a9c08d3375e54749d1c6fc079c 2 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6559e38734b46e78f565e0a9c08d3375e54749d1c6fc079c 2 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6559e38734b46e78f565e0a9c08d3375e54749d1c6fc079c 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jjr 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jjr 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.jjr 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=83c708c1ae82671a83ad8b11ccfbb59f 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p9x 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 83c708c1ae82671a83ad8b11ccfbb59f 1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 83c708c1ae82671a83ad8b11ccfbb59f 1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=83c708c1ae82671a83ad8b11ccfbb59f 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p9x 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p9x 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.p9x 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d00cc36c8babb18e89928073daec16bb8529b429eeeb55652ff92ac322724ed8 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dcR 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key d00cc36c8babb18e89928073daec16bb8529b429eeeb55652ff92ac322724ed8 3 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d00cc36c8babb18e89928073daec16bb8529b429eeeb55652ff92ac322724ed8 3 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d00cc36c8babb18e89928073daec16bb8529b429eeeb55652ff92ac322724ed8 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dcR 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dcR 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dcR 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 306767 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 306767 ']' 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.255 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 306976 /var/tmp/host.sock 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 306976 ']' 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:18.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
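From here on the log interleaves two RPC channels: rpc_cmd drives the target over the default /var/tmp/spdk.sock, while every hostrpc call expands (auth.sh@31, visible throughout) to rpc.py against the host-side spdk_tgt:

# auth.sh@31 as it appears expanded in every hostrpc trace line;
# $rootdir again abbreviates the workspace path.
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
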
00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.515 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.775 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.775 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JU8 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.JU8 00:17:18.776 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.JU8 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.s7j ]] 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s7j 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s7j 00:17:19.036 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s7j 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Brd 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.037 06:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Brd 00:17:19.037 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Brd 00:17:19.298 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.F4x ]] 00:17:19.298 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F4x 00:17:19.298 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.299 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.299 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.299 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F4x 00:17:19.299 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F4x 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jjr 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.jjr 00:17:19.560 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.jjr 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.p9x ]] 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p9x 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p9x 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p9x 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:19.821 06:16:14 
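The @108-113 block repeating above is the key-distribution loop: each secret is registered in both keyrings, and controller keys only when one was generated (ckeys[3] is empty). Condensed:

# Condensed form of auth.sh@108-113 as it unrolls in the trace.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"     # target keyring
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"     # host keyring
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
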
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dcR 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dcR 00:17:19.821 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dcR 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.082 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.343 06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.343 
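auth.sh@118-123, entered above, is the outer test matrix: for every digest/dhgroup pair the host bdev layer is reconfigured, then each keyid is pushed through connect_authenticate. The shape of the loop (the digests array is defined earlier in the script, alongside the dhgroups array seen at the top of this section):

# Test matrix as driven at auth.sh@118-123.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
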
06:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.603 00:17:20.603 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.603 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.603 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.603 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.603 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.864 { 00:17:20.864 "cntlid": 1, 00:17:20.864 "qid": 0, 00:17:20.864 "state": "enabled", 00:17:20.864 "thread": "nvmf_tgt_poll_group_000", 00:17:20.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:20.864 "listen_address": { 00:17:20.864 "trtype": "TCP", 00:17:20.864 "adrfam": "IPv4", 00:17:20.864 "traddr": "10.0.0.2", 00:17:20.864 "trsvcid": "4420" 00:17:20.864 }, 00:17:20.864 "peer_address": { 00:17:20.864 "trtype": "TCP", 00:17:20.864 "adrfam": "IPv4", 00:17:20.864 "traddr": "10.0.0.1", 00:17:20.864 "trsvcid": "59940" 00:17:20.864 }, 00:17:20.864 "auth": { 00:17:20.864 "state": "completed", 00:17:20.864 "digest": "sha256", 00:17:20.864 "dhgroup": "null" 00:17:20.864 } 00:17:20.864 } 00:17:20.864 ]' 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.864 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.132 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
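connect_authenticate only passes when the negotiated parameters round-trip: the qpair dump above is checked field by field (auth.sh@73-77) before the controller is detached:

# Verification as traced at auth.sh@73-77.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
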
DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:21.132 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.348 06:16:19 
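The same secrets are then exercised end to end through the kernel initiator; the nvme connect above carries them as literal DHHC-1 strings. An equivalent invocation, assuming the nvme_connect wrapper reads them back from the key files:

# Kernel-initiator leg (auth.sh@80-83); --dhchap-ctrl-secret is passed only
# for keys that have a controller counterpart.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" \
    --dhchap-secret "$(< "${keys[$keyid]}")" \
    --dhchap-ctrl-secret "$(< "${ckeys[$keyid]}")"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
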
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.348 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.348 00:17:25.609 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.609 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.609 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.609 { 00:17:25.609 "cntlid": 3, 00:17:25.609 "qid": 0, 00:17:25.609 "state": "enabled", 00:17:25.609 "thread": "nvmf_tgt_poll_group_000", 00:17:25.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:25.609 "listen_address": { 00:17:25.609 "trtype": "TCP", 00:17:25.609 "adrfam": "IPv4", 00:17:25.609 "traddr": "10.0.0.2", 00:17:25.609 "trsvcid": "4420" 00:17:25.609 }, 00:17:25.609 "peer_address": { 00:17:25.609 "trtype": "TCP", 00:17:25.609 "adrfam": "IPv4", 00:17:25.609 "traddr": "10.0.0.1", 00:17:25.609 "trsvcid": "52456" 00:17:25.609 }, 00:17:25.609 "auth": { 00:17:25.609 "state": "completed", 00:17:25.609 "digest": "sha256", 00:17:25.609 "dhgroup": "null" 00:17:25.609 } 00:17:25.609 } 00:17:25.609 ]' 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.609 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.869 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.869 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.869 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.869 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.869 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.130 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:26.130 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.700 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.961 06:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.961 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.222 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.222 { 00:17:27.222 "cntlid": 5, 00:17:27.222 "qid": 0, 00:17:27.222 "state": "enabled", 00:17:27.222 "thread": "nvmf_tgt_poll_group_000", 00:17:27.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:27.222 "listen_address": { 00:17:27.222 "trtype": "TCP", 00:17:27.222 "adrfam": "IPv4", 00:17:27.222 "traddr": "10.0.0.2", 00:17:27.222 "trsvcid": "4420" 00:17:27.222 }, 00:17:27.222 "peer_address": { 00:17:27.222 "trtype": "TCP", 00:17:27.222 "adrfam": "IPv4", 00:17:27.222 "traddr": "10.0.0.1", 00:17:27.222 "trsvcid": "52480" 00:17:27.222 }, 00:17:27.222 "auth": { 00:17:27.222 "state": "completed", 00:17:27.222 "digest": "sha256", 00:17:27.222 "dhgroup": "null" 00:17:27.222 } 00:17:27.222 } 00:17:27.222 ]' 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.222 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.482 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.482 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.482 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.482 06:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.482 06:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.742 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:27.742 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.312 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.573 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.574 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.574 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.574 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.574 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.574 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.835 { 00:17:28.835 "cntlid": 7, 00:17:28.835 "qid": 0, 00:17:28.835 "state": "enabled", 00:17:28.835 "thread": "nvmf_tgt_poll_group_000", 00:17:28.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:28.835 "listen_address": { 00:17:28.835 "trtype": "TCP", 00:17:28.835 "adrfam": "IPv4", 00:17:28.835 "traddr": "10.0.0.2", 00:17:28.835 "trsvcid": "4420" 00:17:28.835 }, 00:17:28.835 "peer_address": { 00:17:28.835 "trtype": "TCP", 00:17:28.835 "adrfam": "IPv4", 00:17:28.835 "traddr": "10.0.0.1", 00:17:28.835 "trsvcid": "52508" 00:17:28.835 }, 00:17:28.835 "auth": { 00:17:28.835 "state": "completed", 00:17:28.835 "digest": "sha256", 00:17:28.835 "dhgroup": "null" 00:17:28.835 } 00:17:28.835 } 00:17:28.835 ]' 00:17:28.835 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.835 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.835 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:29.095 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.666 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.927 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.187 00:17:30.187 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.187 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.187 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.447 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.447 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.447 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.448 { 00:17:30.448 "cntlid": 9, 00:17:30.448 "qid": 0, 00:17:30.448 "state": "enabled", 00:17:30.448 "thread": "nvmf_tgt_poll_group_000", 00:17:30.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:30.448 "listen_address": { 00:17:30.448 "trtype": "TCP", 00:17:30.448 "adrfam": "IPv4", 00:17:30.448 "traddr": "10.0.0.2", 00:17:30.448 "trsvcid": "4420" 00:17:30.448 }, 00:17:30.448 "peer_address": { 00:17:30.448 "trtype": "TCP", 00:17:30.448 "adrfam": "IPv4", 00:17:30.448 "traddr": "10.0.0.1", 00:17:30.448 "trsvcid": "52548" 00:17:30.448 }, 00:17:30.448 "auth": { 00:17:30.448 "state": "completed", 00:17:30.448 "digest": "sha256", 00:17:30.448 "dhgroup": "ffdhe2048" 00:17:30.448 } 00:17:30.448 } 00:17:30.448 ]' 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.448 06:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.708 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:30.708 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:31.277 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.537 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.537 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:31.537 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.537 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.537 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.798 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.798 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.799 06:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.799 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.799 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.060 { 00:17:32.060 "cntlid": 11, 00:17:32.060 "qid": 0, 00:17:32.060 "state": "enabled", 00:17:32.060 "thread": "nvmf_tgt_poll_group_000", 00:17:32.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:32.060 "listen_address": { 00:17:32.060 "trtype": "TCP", 00:17:32.060 "adrfam": "IPv4", 00:17:32.060 "traddr": "10.0.0.2", 00:17:32.060 "trsvcid": "4420" 00:17:32.060 }, 00:17:32.060 "peer_address": { 00:17:32.060 "trtype": "TCP", 00:17:32.060 "adrfam": "IPv4", 00:17:32.060 "traddr": "10.0.0.1", 00:17:32.060 "trsvcid": "52560" 00:17:32.060 }, 00:17:32.060 "auth": { 00:17:32.060 "state": "completed", 00:17:32.060 "digest": "sha256", 00:17:32.060 "dhgroup": "ffdhe2048" 00:17:32.060 } 00:17:32.060 } 00:17:32.060 ]' 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.060 06:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.060 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:32.321 06:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.265 06:16:27 
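The --dhchap-secret strings in these nvme connect invocations follow the DHHC-1 secret representation from the NVMe in-band authentication spec (stated from the spec, not derivable from this log alone): "DHHC-1:<hh>:<base64 of key || CRC-32 of key>:", where <hh> is 00 for an untransformed key and 01/02/03 for 32-, 48- and 64-byte keys matched to SHA-256/384/512. The lengths check out against the secrets in this trace:

    # take the DHHC-1:01 secret from the log and confirm its payload length
    s='DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0:'
    b64=${s#DHHC-1:*:}        # strip the "DHHC-1:<hh>:" prefix
    b64=${b64%:}              # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c
    # -> 36 bytes: a 32-byte key (matching <hh>=01) plus the 4-byte CRC-32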
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.265 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.526 00:17:33.526 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.526 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.526 06:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.805 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.806 { 00:17:33.806 "cntlid": 13, 00:17:33.806 "qid": 0, 00:17:33.806 "state": "enabled", 00:17:33.806 "thread": "nvmf_tgt_poll_group_000", 00:17:33.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:33.806 "listen_address": { 00:17:33.806 "trtype": "TCP", 00:17:33.806 "adrfam": "IPv4", 00:17:33.806 "traddr": "10.0.0.2", 00:17:33.806 "trsvcid": "4420" 00:17:33.806 }, 00:17:33.806 "peer_address": { 00:17:33.806 "trtype": "TCP", 00:17:33.806 "adrfam": "IPv4", 00:17:33.806 "traddr": "10.0.0.1", 00:17:33.806 "trsvcid": "52590" 00:17:33.806 }, 00:17:33.806 "auth": { 00:17:33.806 "state": "completed", 00:17:33.806 "digest": 
"sha256", 00:17:33.806 "dhgroup": "ffdhe2048" 00:17:33.806 } 00:17:33.806 } 00:17:33.806 ]' 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.806 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.065 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:34.065 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.636 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.637 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.897 06:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.897 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.157 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.157 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.417 { 00:17:35.417 "cntlid": 15, 00:17:35.417 "qid": 0, 00:17:35.417 "state": "enabled", 00:17:35.417 "thread": "nvmf_tgt_poll_group_000", 00:17:35.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:35.417 "listen_address": { 00:17:35.417 "trtype": "TCP", 00:17:35.417 "adrfam": "IPv4", 00:17:35.417 "traddr": "10.0.0.2", 00:17:35.417 "trsvcid": "4420" 00:17:35.417 }, 00:17:35.417 "peer_address": { 00:17:35.417 "trtype": "TCP", 00:17:35.417 "adrfam": "IPv4", 00:17:35.417 "traddr": "10.0.0.1", 00:17:35.417 
"trsvcid": "38282" 00:17:35.417 }, 00:17:35.417 "auth": { 00:17:35.417 "state": "completed", 00:17:35.417 "digest": "sha256", 00:17:35.417 "dhgroup": "ffdhe2048" 00:17:35.417 } 00:17:35.417 } 00:17:35.417 ]' 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.417 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.678 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:35.678 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.284 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.544 06:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.544 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.545 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.545 06:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.804 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.804 { 00:17:36.804 "cntlid": 17, 00:17:36.804 "qid": 0, 00:17:36.804 "state": "enabled", 00:17:36.804 "thread": "nvmf_tgt_poll_group_000", 00:17:36.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:36.804 "listen_address": { 00:17:36.804 "trtype": "TCP", 00:17:36.804 "adrfam": "IPv4", 
00:17:36.804 "traddr": "10.0.0.2", 00:17:36.804 "trsvcid": "4420" 00:17:36.804 }, 00:17:36.804 "peer_address": { 00:17:36.804 "trtype": "TCP", 00:17:36.804 "adrfam": "IPv4", 00:17:36.804 "traddr": "10.0.0.1", 00:17:36.804 "trsvcid": "38298" 00:17:36.804 }, 00:17:36.804 "auth": { 00:17:36.804 "state": "completed", 00:17:36.804 "digest": "sha256", 00:17:36.804 "dhgroup": "ffdhe3072" 00:17:36.804 } 00:17:36.804 } 00:17:36.804 ]' 00:17:36.804 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.063 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.323 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:37.324 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.894 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.155 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.155 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.416 { 
00:17:38.416 "cntlid": 19, 00:17:38.416 "qid": 0, 00:17:38.416 "state": "enabled", 00:17:38.416 "thread": "nvmf_tgt_poll_group_000", 00:17:38.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:38.416 "listen_address": { 00:17:38.416 "trtype": "TCP", 00:17:38.416 "adrfam": "IPv4", 00:17:38.416 "traddr": "10.0.0.2", 00:17:38.416 "trsvcid": "4420" 00:17:38.416 }, 00:17:38.416 "peer_address": { 00:17:38.416 "trtype": "TCP", 00:17:38.416 "adrfam": "IPv4", 00:17:38.416 "traddr": "10.0.0.1", 00:17:38.416 "trsvcid": "38338" 00:17:38.416 }, 00:17:38.416 "auth": { 00:17:38.416 "state": "completed", 00:17:38.416 "digest": "sha256", 00:17:38.416 "dhgroup": "ffdhe3072" 00:17:38.416 } 00:17:38.416 } 00:17:38.416 ]' 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.416 06:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.677 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.677 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.677 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.677 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:38.677 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:39.248 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.510 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.510 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.770 00:17:39.770 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.770 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.770 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.029 06:16:34 
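Each round closes by proving the same credentials through the kernel initiator and then clearing state for the next key: detach the SPDK-host controller (@78), connect with nvme-cli passing the DHHC-1 secrets literally (@80/@36), disconnect (@82), and drop the host from the subsystem ACL (@83). That double admission also lines up with the cntlid in the qpair dumps climbing by two per round (5, 7, 9, ...). The closing half, compressed, with $key/$ckey standing in for the literal DHHC-1 strings (the ctrl-secret flag is absent in key3 rounds):

    hostrpc bdev_nvme_detach_controller nvme0                     # @78: SPDK host controller out
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"       # @80/@36: kernel host re-proof
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                 # @82
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a   # @83: clean ACL for next keyid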
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.029 { 00:17:40.029 "cntlid": 21, 00:17:40.029 "qid": 0, 00:17:40.029 "state": "enabled", 00:17:40.029 "thread": "nvmf_tgt_poll_group_000", 00:17:40.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:40.029 "listen_address": { 00:17:40.029 "trtype": "TCP", 00:17:40.029 "adrfam": "IPv4", 00:17:40.029 "traddr": "10.0.0.2", 00:17:40.029 "trsvcid": "4420" 00:17:40.029 }, 00:17:40.029 "peer_address": { 00:17:40.029 "trtype": "TCP", 00:17:40.029 "adrfam": "IPv4", 00:17:40.029 "traddr": "10.0.0.1", 00:17:40.029 "trsvcid": "38362" 00:17:40.029 }, 00:17:40.029 "auth": { 00:17:40.029 "state": "completed", 00:17:40.029 "digest": "sha256", 00:17:40.029 "dhgroup": "ffdhe3072" 00:17:40.029 } 00:17:40.029 } 00:17:40.029 ]' 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.029 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.290 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.290 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.290 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.290 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:40.290 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:40.860 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.121 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.381 00:17:41.381 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.381 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.381 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.644 06:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.644 { 00:17:41.644 "cntlid": 23, 00:17:41.644 "qid": 0, 00:17:41.644 "state": "enabled", 00:17:41.644 "thread": "nvmf_tgt_poll_group_000", 00:17:41.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:41.644 "listen_address": { 00:17:41.644 "trtype": "TCP", 00:17:41.644 "adrfam": "IPv4", 00:17:41.644 "traddr": "10.0.0.2", 00:17:41.644 "trsvcid": "4420" 00:17:41.644 }, 00:17:41.644 "peer_address": { 00:17:41.644 "trtype": "TCP", 00:17:41.644 "adrfam": "IPv4", 00:17:41.644 "traddr": "10.0.0.1", 00:17:41.644 "trsvcid": "38396" 00:17:41.644 }, 00:17:41.644 "auth": { 00:17:41.644 "state": "completed", 00:17:41.644 "digest": "sha256", 00:17:41.644 "dhgroup": "ffdhe3072" 00:17:41.644 } 00:17:41.644 } 00:17:41.644 ]' 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.644 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.905 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:41.905 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:42.476 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.476 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.736 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.996 00:17:42.996 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.996 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.996 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.257 { 00:17:43.257 "cntlid": 25, 00:17:43.257 "qid": 0, 00:17:43.257 "state": "enabled", 00:17:43.257 "thread": "nvmf_tgt_poll_group_000", 00:17:43.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:43.257 "listen_address": { 00:17:43.257 "trtype": "TCP", 00:17:43.257 "adrfam": "IPv4", 00:17:43.257 "traddr": "10.0.0.2", 00:17:43.257 "trsvcid": "4420" 00:17:43.257 }, 00:17:43.257 "peer_address": { 00:17:43.257 "trtype": "TCP", 00:17:43.257 "adrfam": "IPv4", 00:17:43.257 "traddr": "10.0.0.1", 00:17:43.257 "trsvcid": "38430" 00:17:43.257 }, 00:17:43.257 "auth": { 00:17:43.257 "state": "completed", 00:17:43.257 "digest": "sha256", 00:17:43.257 "dhgroup": "ffdhe4096" 00:17:43.257 } 00:17:43.257 } 00:17:43.257 ]' 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.257 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.517 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:43.517 06:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.087 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.088 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.348 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.608 00:17:44.608 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.608 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.608 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.869 { 00:17:44.869 "cntlid": 27, 00:17:44.869 "qid": 0, 00:17:44.869 "state": "enabled", 00:17:44.869 "thread": "nvmf_tgt_poll_group_000", 00:17:44.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:44.869 "listen_address": { 00:17:44.869 "trtype": "TCP", 00:17:44.869 "adrfam": "IPv4", 00:17:44.869 "traddr": "10.0.0.2", 00:17:44.869 "trsvcid": "4420" 00:17:44.869 }, 00:17:44.869 "peer_address": { 00:17:44.869 "trtype": "TCP", 00:17:44.869 "adrfam": "IPv4", 00:17:44.869 "traddr": "10.0.0.1", 00:17:44.869 "trsvcid": "38458" 00:17:44.869 }, 00:17:44.869 "auth": { 00:17:44.869 "state": "completed", 00:17:44.869 "digest": "sha256", 00:17:44.869 "dhgroup": "ffdhe4096" 00:17:44.869 } 00:17:44.869 } 00:17:44.869 ]' 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.869 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.129 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:45.129 06:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:45.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.699 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.959 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.219 00:17:46.219 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:46.219 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.219 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.479 { 00:17:46.479 "cntlid": 29, 00:17:46.479 "qid": 0, 00:17:46.479 "state": "enabled", 00:17:46.479 "thread": "nvmf_tgt_poll_group_000", 00:17:46.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:46.479 "listen_address": { 00:17:46.479 "trtype": "TCP", 00:17:46.479 "adrfam": "IPv4", 00:17:46.479 "traddr": "10.0.0.2", 00:17:46.479 "trsvcid": "4420" 00:17:46.479 }, 00:17:46.479 "peer_address": { 00:17:46.479 "trtype": "TCP", 00:17:46.479 "adrfam": "IPv4", 00:17:46.479 "traddr": "10.0.0.1", 00:17:46.479 "trsvcid": "53430" 00:17:46.479 }, 00:17:46.479 "auth": { 00:17:46.479 "state": "completed", 00:17:46.479 "digest": "sha256", 00:17:46.479 "dhgroup": "ffdhe4096" 00:17:46.479 } 00:17:46.479 } 00:17:46.479 ]' 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.479 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.479 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.479 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.479 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.739 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:46.739 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: 
--dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.310 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.571 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.832 00:17:47.832 06:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.832 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.832 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.092 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.092 { 00:17:48.092 "cntlid": 31, 00:17:48.092 "qid": 0, 00:17:48.092 "state": "enabled", 00:17:48.092 "thread": "nvmf_tgt_poll_group_000", 00:17:48.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:48.092 "listen_address": { 00:17:48.092 "trtype": "TCP", 00:17:48.092 "adrfam": "IPv4", 00:17:48.092 "traddr": "10.0.0.2", 00:17:48.093 "trsvcid": "4420" 00:17:48.093 }, 00:17:48.093 "peer_address": { 00:17:48.093 "trtype": "TCP", 00:17:48.093 "adrfam": "IPv4", 00:17:48.093 "traddr": "10.0.0.1", 00:17:48.093 "trsvcid": "53452" 00:17:48.093 }, 00:17:48.093 "auth": { 00:17:48.093 "state": "completed", 00:17:48.093 "digest": "sha256", 00:17:48.093 "dhgroup": "ffdhe4096" 00:17:48.093 } 00:17:48.093 } 00:17:48.093 ]' 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.093 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.353 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:48.353 06:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret 
DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.925 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.205 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.466 00:17:49.466 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.466 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.466 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.727 { 00:17:49.727 "cntlid": 33, 00:17:49.727 "qid": 0, 00:17:49.727 "state": "enabled", 00:17:49.727 "thread": "nvmf_tgt_poll_group_000", 00:17:49.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:49.727 "listen_address": { 00:17:49.727 "trtype": "TCP", 00:17:49.727 "adrfam": "IPv4", 00:17:49.727 "traddr": "10.0.0.2", 00:17:49.727 "trsvcid": "4420" 00:17:49.727 }, 00:17:49.727 "peer_address": { 00:17:49.727 "trtype": "TCP", 00:17:49.727 "adrfam": "IPv4", 00:17:49.727 "traddr": "10.0.0.1", 00:17:49.727 "trsvcid": "53470" 00:17:49.727 }, 00:17:49.727 "auth": { 00:17:49.727 "state": "completed", 00:17:49.727 "digest": "sha256", 00:17:49.727 "dhgroup": "ffdhe6144" 00:17:49.727 } 00:17:49.727 } 00:17:49.727 ]' 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.727 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.988 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret 
DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:49.988 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.559 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.820 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.080 00:17:51.080 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.080 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.080 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.341 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.341 { 00:17:51.341 "cntlid": 35, 00:17:51.341 "qid": 0, 00:17:51.341 "state": "enabled", 00:17:51.341 "thread": "nvmf_tgt_poll_group_000", 00:17:51.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:51.341 "listen_address": { 00:17:51.341 "trtype": "TCP", 00:17:51.341 "adrfam": "IPv4", 00:17:51.341 "traddr": "10.0.0.2", 00:17:51.341 "trsvcid": "4420" 00:17:51.341 }, 00:17:51.341 "peer_address": { 00:17:51.342 "trtype": "TCP", 00:17:51.342 "adrfam": "IPv4", 00:17:51.342 "traddr": "10.0.0.1", 00:17:51.342 "trsvcid": "53486" 00:17:51.342 }, 00:17:51.342 "auth": { 00:17:51.342 "state": "completed", 00:17:51.342 "digest": "sha256", 00:17:51.342 "dhgroup": "ffdhe6144" 00:17:51.342 } 00:17:51.342 } 00:17:51.342 ]' 00:17:51.342 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.342 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.342 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.342 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.602 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.602 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.602 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.602 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.602 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:51.602 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:52.173 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.173 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:52.173 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.173 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.434 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.694 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.954 { 00:17:52.954 "cntlid": 37, 00:17:52.954 "qid": 0, 00:17:52.954 "state": "enabled", 00:17:52.954 "thread": "nvmf_tgt_poll_group_000", 00:17:52.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:52.954 "listen_address": { 00:17:52.954 "trtype": "TCP", 00:17:52.954 "adrfam": "IPv4", 00:17:52.954 "traddr": "10.0.0.2", 00:17:52.954 "trsvcid": "4420" 00:17:52.954 }, 00:17:52.954 "peer_address": { 00:17:52.954 "trtype": "TCP", 00:17:52.954 "adrfam": "IPv4", 00:17:52.954 "traddr": "10.0.0.1", 00:17:52.954 "trsvcid": "53512" 00:17:52.954 }, 00:17:52.954 "auth": { 00:17:52.954 "state": "completed", 00:17:52.954 "digest": "sha256", 00:17:52.954 "dhgroup": "ffdhe6144" 00:17:52.954 } 00:17:52.954 } 00:17:52.954 ]' 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.954 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:53.215 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 06:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.157 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.417 00:17:54.417 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.417 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.417 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.677 { 00:17:54.677 "cntlid": 39, 00:17:54.677 "qid": 0, 00:17:54.677 "state": "enabled", 00:17:54.677 "thread": "nvmf_tgt_poll_group_000", 00:17:54.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:54.677 "listen_address": { 00:17:54.677 "trtype": "TCP", 00:17:54.677 "adrfam": "IPv4", 00:17:54.677 "traddr": "10.0.0.2", 00:17:54.677 "trsvcid": "4420" 00:17:54.677 }, 00:17:54.677 "peer_address": { 00:17:54.677 "trtype": "TCP", 00:17:54.677 "adrfam": "IPv4", 00:17:54.677 "traddr": "10.0.0.1", 00:17:54.677 "trsvcid": "53544" 00:17:54.677 }, 00:17:54.677 "auth": { 00:17:54.677 "state": "completed", 00:17:54.677 "digest": "sha256", 00:17:54.677 "dhgroup": "ffdhe6144" 00:17:54.677 } 00:17:54.677 } 00:17:54.677 ]' 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.677 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.936 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:54.936 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.936 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.936 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:54.937 06:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:17:55.508 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.768 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.769 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.340 00:17:56.340 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.340 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.340 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.600 { 00:17:56.600 "cntlid": 41, 00:17:56.600 "qid": 0, 00:17:56.600 "state": "enabled", 00:17:56.600 "thread": "nvmf_tgt_poll_group_000", 00:17:56.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:56.600 "listen_address": { 00:17:56.600 "trtype": "TCP", 00:17:56.600 "adrfam": "IPv4", 00:17:56.600 "traddr": "10.0.0.2", 00:17:56.600 "trsvcid": "4420" 00:17:56.600 }, 00:17:56.600 "peer_address": { 00:17:56.600 "trtype": "TCP", 00:17:56.600 "adrfam": "IPv4", 00:17:56.600 "traddr": "10.0.0.1", 00:17:56.600 "trsvcid": "58694" 00:17:56.600 }, 00:17:56.600 "auth": { 00:17:56.600 "state": "completed", 00:17:56.600 "digest": "sha256", 00:17:56.600 "dhgroup": "ffdhe8192" 00:17:56.600 } 00:17:56.600 } 00:17:56.600 ]' 00:17:56.600 06:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.600 06:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.600 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.860 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:56.860 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.429 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.691 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.274 00:17:58.274 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.274 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.275 { 00:17:58.275 "cntlid": 43, 00:17:58.275 "qid": 0, 00:17:58.275 "state": "enabled", 00:17:58.275 "thread": "nvmf_tgt_poll_group_000", 00:17:58.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:58.275 "listen_address": { 00:17:58.275 "trtype": "TCP", 00:17:58.275 "adrfam": "IPv4", 00:17:58.275 "traddr": "10.0.0.2", 00:17:58.275 "trsvcid": "4420" 00:17:58.275 }, 00:17:58.275 "peer_address": { 00:17:58.275 "trtype": "TCP", 00:17:58.275 "adrfam": "IPv4", 00:17:58.275 "traddr": "10.0.0.1", 00:17:58.275 "trsvcid": "58710" 00:17:58.275 }, 00:17:58.275 "auth": { 00:17:58.275 "state": "completed", 00:17:58.275 "digest": "sha256", 00:17:58.275 "dhgroup": "ffdhe8192" 00:17:58.275 } 00:17:58.275 } 00:17:58.275 ]' 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.275 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.536 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.536 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.536 06:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.536 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:58.536 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:17:59.107 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.369 06:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.369 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.941 00:17:59.941 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.941 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.941 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.215 { 00:18:00.215 "cntlid": 45, 00:18:00.215 "qid": 0, 00:18:00.215 "state": "enabled", 00:18:00.215 "thread": "nvmf_tgt_poll_group_000", 00:18:00.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:00.215 "listen_address": { 00:18:00.215 "trtype": "TCP", 00:18:00.215 "adrfam": "IPv4", 00:18:00.215 "traddr": "10.0.0.2", 00:18:00.215 "trsvcid": "4420" 00:18:00.215 }, 00:18:00.215 "peer_address": { 00:18:00.215 "trtype": "TCP", 00:18:00.215 "adrfam": "IPv4", 00:18:00.215 "traddr": "10.0.0.1", 00:18:00.215 "trsvcid": "58748" 00:18:00.215 }, 00:18:00.215 "auth": { 00:18:00.215 "state": "completed", 00:18:00.215 "digest": "sha256", 00:18:00.215 "dhgroup": "ffdhe8192" 00:18:00.215 } 00:18:00.215 } 00:18:00.215 ]' 00:18:00.215 
06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.215 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.477 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:00.477 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.046 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.307 06:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.307 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.568 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.829 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.829 { 00:18:01.829 "cntlid": 47, 00:18:01.829 "qid": 0, 00:18:01.829 "state": "enabled", 00:18:01.829 "thread": "nvmf_tgt_poll_group_000", 00:18:01.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:01.829 "listen_address": { 00:18:01.829 "trtype": "TCP", 00:18:01.829 "adrfam": "IPv4", 00:18:01.829 "traddr": "10.0.0.2", 00:18:01.829 "trsvcid": "4420" 00:18:01.829 }, 00:18:01.829 "peer_address": { 00:18:01.829 "trtype": "TCP", 00:18:01.829 "adrfam": "IPv4", 00:18:01.829 "traddr": "10.0.0.1", 00:18:01.829 "trsvcid": "58778" 00:18:01.829 }, 00:18:01.829 "auth": { 00:18:01.829 "state": "completed", 00:18:01.829 
"digest": "sha256", 00:18:01.830 "dhgroup": "ffdhe8192" 00:18:01.830 } 00:18:01.830 } 00:18:01.830 ]' 00:18:01.830 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.830 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.830 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.091 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.091 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.091 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.091 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.091 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.353 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:02.353 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:02.922 06:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.922 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.182 00:18:03.182 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.182 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.182 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.442 { 00:18:03.442 "cntlid": 49, 00:18:03.442 "qid": 0, 00:18:03.442 "state": "enabled", 00:18:03.442 "thread": "nvmf_tgt_poll_group_000", 00:18:03.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:03.442 "listen_address": { 00:18:03.442 "trtype": "TCP", 00:18:03.442 "adrfam": "IPv4", 
00:18:03.442 "traddr": "10.0.0.2", 00:18:03.442 "trsvcid": "4420" 00:18:03.442 }, 00:18:03.442 "peer_address": { 00:18:03.442 "trtype": "TCP", 00:18:03.442 "adrfam": "IPv4", 00:18:03.442 "traddr": "10.0.0.1", 00:18:03.442 "trsvcid": "58810" 00:18:03.442 }, 00:18:03.442 "auth": { 00:18:03.442 "state": "completed", 00:18:03.442 "digest": "sha384", 00:18:03.442 "dhgroup": "null" 00:18:03.442 } 00:18:03.442 } 00:18:03.442 ]' 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.442 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.442 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:03.442 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.702 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.702 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.702 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.702 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:03.702 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.644 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.644 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.903 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.903 { 00:18:04.903 "cntlid": 51, 00:18:04.903 "qid": 0, 00:18:04.903 "state": "enabled", 
00:18:04.903 "thread": "nvmf_tgt_poll_group_000", 00:18:04.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:04.903 "listen_address": { 00:18:04.903 "trtype": "TCP", 00:18:04.903 "adrfam": "IPv4", 00:18:04.904 "traddr": "10.0.0.2", 00:18:04.904 "trsvcid": "4420" 00:18:04.904 }, 00:18:04.904 "peer_address": { 00:18:04.904 "trtype": "TCP", 00:18:04.904 "adrfam": "IPv4", 00:18:04.904 "traddr": "10.0.0.1", 00:18:04.904 "trsvcid": "58842" 00:18:04.904 }, 00:18:04.904 "auth": { 00:18:04.904 "state": "completed", 00:18:04.904 "digest": "sha384", 00:18:04.904 "dhgroup": "null" 00:18:04.904 } 00:18:04.904 } 00:18:04.904 ]' 00:18:04.904 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.163 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.164 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.424 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:05.424 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:05.994 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.255 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.515 00:18:06.515 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.515 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.515 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.515 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.515 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.515 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.515 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.776 06:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.776 { 00:18:06.776 "cntlid": 53, 00:18:06.776 "qid": 0, 00:18:06.776 "state": "enabled", 00:18:06.776 "thread": "nvmf_tgt_poll_group_000", 00:18:06.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:06.776 "listen_address": { 00:18:06.776 "trtype": "TCP", 00:18:06.776 "adrfam": "IPv4", 00:18:06.776 "traddr": "10.0.0.2", 00:18:06.776 "trsvcid": "4420" 00:18:06.776 }, 00:18:06.776 "peer_address": { 00:18:06.776 "trtype": "TCP", 00:18:06.776 "adrfam": "IPv4", 00:18:06.776 "traddr": "10.0.0.1", 00:18:06.776 "trsvcid": "41328" 00:18:06.776 }, 00:18:06.776 "auth": { 00:18:06.776 "state": "completed", 00:18:06.776 "digest": "sha384", 00:18:06.776 "dhgroup": "null" 00:18:06.776 } 00:18:06.776 } 00:18:06.776 ]' 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.776 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.037 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:07.037 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.608 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.868 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.868 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.140 { 00:18:08.140 "cntlid": 55, 00:18:08.140 "qid": 0, 00:18:08.140 "state": "enabled", 00:18:08.140 "thread": "nvmf_tgt_poll_group_000", 00:18:08.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:08.140 "listen_address": { 00:18:08.140 "trtype": "TCP", 00:18:08.140 "adrfam": "IPv4", 00:18:08.140 "traddr": "10.0.0.2", 00:18:08.140 "trsvcid": "4420" 00:18:08.140 }, 00:18:08.140 "peer_address": { 00:18:08.140 "trtype": "TCP", 00:18:08.140 "adrfam": "IPv4", 00:18:08.140 "traddr": "10.0.0.1", 00:18:08.140 "trsvcid": "41360" 00:18:08.140 }, 00:18:08.140 "auth": { 00:18:08.140 "state": "completed", 00:18:08.140 "digest": "sha384", 00:18:08.140 "dhgroup": "null" 00:18:08.140 } 00:18:08.140 } 00:18:08.140 ]' 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.140 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:08.401 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.342 06:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.342 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.604 00:18:09.604 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.604 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.604 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.866 { 00:18:09.866 "cntlid": 57, 00:18:09.866 "qid": 0, 00:18:09.866 "state": "enabled", 00:18:09.866 "thread": "nvmf_tgt_poll_group_000", 00:18:09.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:09.866 "listen_address": { 00:18:09.866 "trtype": "TCP", 00:18:09.866 "adrfam": "IPv4", 00:18:09.866 "traddr": "10.0.0.2", 00:18:09.866 "trsvcid": "4420" 00:18:09.866 }, 00:18:09.866 "peer_address": { 00:18:09.866 "trtype": "TCP", 00:18:09.866 "adrfam": "IPv4", 00:18:09.866 "traddr": "10.0.0.1", 00:18:09.866 "trsvcid": "41392" 00:18:09.866 }, 00:18:09.866 "auth": { 00:18:09.866 "state": "completed", 00:18:09.866 "digest": "sha384", 00:18:09.866 "dhgroup": "ffdhe2048" 00:18:09.866 } 00:18:09.866 } 00:18:09.866 ]' 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.866 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.127 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:10.127 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.706 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.969 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.970 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.229 00:18:11.229 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.229 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.230 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.490 { 00:18:11.490 "cntlid": 59, 00:18:11.490 "qid": 0, 00:18:11.490 "state": "enabled", 00:18:11.490 "thread": "nvmf_tgt_poll_group_000", 00:18:11.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:11.490 "listen_address": { 00:18:11.490 "trtype": "TCP", 00:18:11.490 "adrfam": "IPv4", 00:18:11.490 "traddr": "10.0.0.2", 00:18:11.490 "trsvcid": "4420" 00:18:11.490 }, 00:18:11.490 "peer_address": { 00:18:11.490 "trtype": "TCP", 00:18:11.490 "adrfam": "IPv4", 00:18:11.490 "traddr": "10.0.0.1", 00:18:11.490 "trsvcid": "41420" 00:18:11.490 }, 00:18:11.490 "auth": { 00:18:11.490 "state": "completed", 00:18:11.490 "digest": "sha384", 00:18:11.490 "dhgroup": "ffdhe2048" 00:18:11.490 } 00:18:11.490 } 00:18:11.490 ]' 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.490 06:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.750 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:11.751 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.322 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.583 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:12.583 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.583 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.583 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:12.583 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.583 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.844 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.844 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.105 { 00:18:13.105 "cntlid": 61, 00:18:13.105 "qid": 0, 00:18:13.105 "state": "enabled", 00:18:13.105 "thread": "nvmf_tgt_poll_group_000", 00:18:13.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:13.105 "listen_address": { 00:18:13.105 "trtype": "TCP", 00:18:13.105 "adrfam": "IPv4", 00:18:13.105 "traddr": "10.0.0.2", 00:18:13.105 "trsvcid": "4420" 00:18:13.105 }, 00:18:13.105 "peer_address": { 00:18:13.105 "trtype": "TCP", 00:18:13.105 "adrfam": "IPv4", 00:18:13.105 "traddr": "10.0.0.1", 00:18:13.105 "trsvcid": "41466" 00:18:13.105 }, 00:18:13.105 "auth": { 00:18:13.105 "state": "completed", 00:18:13.105 "digest": "sha384", 00:18:13.105 "dhgroup": "ffdhe2048" 00:18:13.105 } 00:18:13.105 } 00:18:13.105 ]' 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.105 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.366 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:13.366 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.937 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.197 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.197 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.459 { 00:18:14.459 "cntlid": 63, 00:18:14.459 "qid": 0, 00:18:14.459 "state": "enabled", 00:18:14.459 "thread": "nvmf_tgt_poll_group_000", 00:18:14.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:14.459 "listen_address": { 00:18:14.459 "trtype": "TCP", 00:18:14.459 "adrfam": "IPv4", 00:18:14.459 "traddr": "10.0.0.2", 00:18:14.459 "trsvcid": "4420" 00:18:14.459 }, 00:18:14.459 "peer_address": { 00:18:14.459 "trtype": "TCP", 00:18:14.459 "adrfam": "IPv4", 00:18:14.459 "traddr": "10.0.0.1", 00:18:14.459 "trsvcid": "41492" 00:18:14.459 }, 00:18:14.459 "auth": { 00:18:14.459 "state": "completed", 00:18:14.459 "digest": "sha384", 00:18:14.459 "dhgroup": "ffdhe2048" 00:18:14.459 } 00:18:14.459 } 00:18:14.459 ]' 00:18:14.459 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.459 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.459 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:14.720 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:15.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.663 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.663 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.924 
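
Each cycle in this stretch of the log follows the same host/target handshake: restrict the host's DH-HMAC-CHAP parameters, register the host's keys on the subsystem, then attach a controller so authentication actually runs. A condensed sketch of the sequence just traced (commands and values copied from the log above; it assumes the SPDK host daemon is listening on /var/tmp/host.sock and that rpc_cmd is the target-side RPC wrapper from autotest_common.sh, as in this run):

# Condensed sketch of one connect_authenticate cycle as traced above
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a"
subnqn="nqn.2024-03.io.spdk:cnode0"

# host side: offer only sha384 + ffdhe3072 during DH-HMAC-CHAP negotiation
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# target side: allow the host with a host key and a controller (bidirectional) key
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach; the controller only appears if authentication completes
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
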
00:18:15.924 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.924 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.924 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.185 { 00:18:16.185 "cntlid": 65, 00:18:16.185 "qid": 0, 00:18:16.185 "state": "enabled", 00:18:16.185 "thread": "nvmf_tgt_poll_group_000", 00:18:16.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:16.185 "listen_address": { 00:18:16.185 "trtype": "TCP", 00:18:16.185 "adrfam": "IPv4", 00:18:16.185 "traddr": "10.0.0.2", 00:18:16.185 "trsvcid": "4420" 00:18:16.185 }, 00:18:16.185 "peer_address": { 00:18:16.185 "trtype": "TCP", 00:18:16.185 "adrfam": "IPv4", 00:18:16.185 "traddr": "10.0.0.1", 00:18:16.185 "trsvcid": "40030" 00:18:16.185 }, 00:18:16.185 "auth": { 00:18:16.185 "state": "completed", 00:18:16.185 "digest": "sha384", 00:18:16.185 "dhgroup": "ffdhe3072" 00:18:16.185 } 00:18:16.185 } 00:18:16.185 ]' 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.185 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.445 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:16.445 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.015 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.281 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.541 00:18:17.542 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.542 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.542 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.803 { 00:18:17.803 "cntlid": 67, 00:18:17.803 "qid": 0, 00:18:17.803 "state": "enabled", 00:18:17.803 "thread": "nvmf_tgt_poll_group_000", 00:18:17.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:17.803 "listen_address": { 00:18:17.803 "trtype": "TCP", 00:18:17.803 "adrfam": "IPv4", 00:18:17.803 "traddr": "10.0.0.2", 00:18:17.803 "trsvcid": "4420" 00:18:17.803 }, 00:18:17.803 "peer_address": { 00:18:17.803 "trtype": "TCP", 00:18:17.803 "adrfam": "IPv4", 00:18:17.803 "traddr": "10.0.0.1", 00:18:17.803 "trsvcid": "40064" 00:18:17.803 }, 00:18:17.803 "auth": { 00:18:17.803 "state": "completed", 00:18:17.803 "digest": "sha384", 00:18:17.803 "dhgroup": "ffdhe3072" 00:18:17.803 } 00:18:17.803 } 00:18:17.803 ]' 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.803 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.063 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret 
DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:18.063 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:18.636 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.898 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.159 00:18:19.159 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.159 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.159 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.419 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.419 { 00:18:19.419 "cntlid": 69, 00:18:19.419 "qid": 0, 00:18:19.419 "state": "enabled", 00:18:19.419 "thread": "nvmf_tgt_poll_group_000", 00:18:19.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:19.419 "listen_address": { 00:18:19.419 "trtype": "TCP", 00:18:19.419 "adrfam": "IPv4", 00:18:19.420 "traddr": "10.0.0.2", 00:18:19.420 "trsvcid": "4420" 00:18:19.420 }, 00:18:19.420 "peer_address": { 00:18:19.420 "trtype": "TCP", 00:18:19.420 "adrfam": "IPv4", 00:18:19.420 "traddr": "10.0.0.1", 00:18:19.420 "trsvcid": "40072" 00:18:19.420 }, 00:18:19.420 "auth": { 00:18:19.420 "state": "completed", 00:18:19.420 "digest": "sha384", 00:18:19.420 "dhgroup": "ffdhe3072" 00:18:19.420 } 00:18:19.420 } 00:18:19.420 ]' 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.420 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:19.681 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:19.681 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:20.252 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.253 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
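
Note that the key3 pass just started differs from the key0 to key2 passes: the trace shows nvmf_subsystem_add_host and bdev_connect run with --dhchap-key key3 only, because no controller key exists at index 3 and the ${ckeys[...]:+...} expansion at target/auth.sh@68 collapses to nothing, leaving unidirectional authentication (the target verifies the host, but not vice versa). A minimal sketch of that expansion pattern, with illustrative array contents:

# Sketch of the conditional ctrl-key expansion seen at target/auth.sh@68;
# the index variable and array contents here are illustrative
ckeys=("ckey0" "ckey1" "ckey2" "")   # index 3 intentionally empty
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
# ${var:+word} expands to nothing for an empty entry, so for key3 the ckey
# array is empty and the controller-key flags drop out of the command line
echo "ckey expands to: ${ckey[*]:-<nothing>}"
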
00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.513 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.773 00:18:20.773 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.773 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.773 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.033 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.033 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.033 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.034 { 00:18:21.034 "cntlid": 71, 00:18:21.034 "qid": 0, 00:18:21.034 "state": "enabled", 00:18:21.034 "thread": "nvmf_tgt_poll_group_000", 00:18:21.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:21.034 "listen_address": { 00:18:21.034 "trtype": "TCP", 00:18:21.034 "adrfam": "IPv4", 00:18:21.034 "traddr": "10.0.0.2", 00:18:21.034 "trsvcid": "4420" 00:18:21.034 }, 00:18:21.034 "peer_address": { 00:18:21.034 "trtype": "TCP", 00:18:21.034 "adrfam": "IPv4", 00:18:21.034 "traddr": "10.0.0.1", 00:18:21.034 "trsvcid": "40102" 00:18:21.034 }, 00:18:21.034 "auth": { 00:18:21.034 "state": "completed", 00:18:21.034 "digest": "sha384", 00:18:21.034 "dhgroup": "ffdhe3072" 00:18:21.034 } 00:18:21.034 } 00:18:21.034 ]' 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.034 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.294 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:21.294 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.865 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
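The same secrets are also pushed through the kernel initiator, which is what the nvme connect / nvme disconnect lines in this trace exercise. Below is a minimal sketch of one such round trip using the key0/ckey0 pairing from the surrounding iterations. The DHHC-1 strings are the literal secrets from this log (format DHHC-1:<t>:<base64 key+CRC>:, where <t> of 00 denotes a non-transformed secret and 01/02/03 a SHA-256/384/512-transformed one); in a real deployment they would come from nvme-cli's gen-dhchap-key rather than a test script:

# Sketch only -- kernel-initiator round trip with bidirectional DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
     -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
     --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
     --dhchap-secret 'DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==:' \
     --dhchap-ctrl-secret 'DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=:'

# Tear down before the next dhgroup/key combination, as auth.sh does between rounds.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
     nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a

A successful teardown prints "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)", which is exactly the acknowledgement interleaved throughout the trace; the loop then advances to the next dhgroup (ffdhe4096, ffdhe6144, ffdhe8192) and repeats the cycle for each key id.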
00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.126 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.386 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.386 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.646 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.646 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.646 { 00:18:22.646 "cntlid": 73, 00:18:22.646 "qid": 0, 00:18:22.646 "state": "enabled", 00:18:22.646 "thread": "nvmf_tgt_poll_group_000", 00:18:22.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:22.646 "listen_address": { 00:18:22.646 "trtype": "TCP", 00:18:22.646 "adrfam": "IPv4", 00:18:22.646 "traddr": "10.0.0.2", 00:18:22.646 "trsvcid": "4420" 00:18:22.646 }, 00:18:22.646 "peer_address": { 00:18:22.646 "trtype": "TCP", 00:18:22.646 "adrfam": "IPv4", 00:18:22.646 "traddr": "10.0.0.1", 00:18:22.646 "trsvcid": "40140" 00:18:22.646 }, 00:18:22.646 "auth": { 00:18:22.646 "state": "completed", 00:18:22.646 "digest": "sha384", 00:18:22.646 "dhgroup": "ffdhe4096" 00:18:22.646 } 00:18:22.646 } 00:18:22.646 ]' 00:18:22.646 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.646 
06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.646 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.906 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:22.906 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.476 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.737 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.997 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.998 { 00:18:23.998 "cntlid": 75, 00:18:23.998 "qid": 0, 00:18:23.998 "state": "enabled", 00:18:23.998 "thread": "nvmf_tgt_poll_group_000", 00:18:23.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:23.998 "listen_address": { 00:18:23.998 "trtype": "TCP", 00:18:23.998 "adrfam": "IPv4", 00:18:23.998 "traddr": "10.0.0.2", 00:18:23.998 "trsvcid": "4420" 00:18:23.998 }, 00:18:23.998 "peer_address": { 00:18:23.998 "trtype": "TCP", 00:18:23.998 "adrfam": "IPv4", 00:18:23.998 "traddr": "10.0.0.1", 00:18:23.998 "trsvcid": "40172" 00:18:23.998 }, 00:18:23.998 "auth": { 00:18:23.998 "state": "completed", 00:18:23.998 "digest": "sha384", 00:18:23.998 "dhgroup": "ffdhe4096" 00:18:23.998 } 00:18:23.998 } 00:18:23.998 ]' 00:18:23.998 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.257 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.257 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.257 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:24.257 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.257 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.258 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.258 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.597 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:24.597 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.239 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.504 00:18:25.504 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.504 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.504 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.767 { 00:18:25.767 "cntlid": 77, 00:18:25.767 "qid": 0, 00:18:25.767 "state": "enabled", 00:18:25.767 "thread": "nvmf_tgt_poll_group_000", 00:18:25.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:25.767 "listen_address": { 00:18:25.767 "trtype": "TCP", 00:18:25.767 "adrfam": "IPv4", 00:18:25.767 "traddr": "10.0.0.2", 00:18:25.767 "trsvcid": "4420" 00:18:25.767 }, 00:18:25.767 "peer_address": { 00:18:25.767 "trtype": "TCP", 00:18:25.767 "adrfam": "IPv4", 00:18:25.767 "traddr": "10.0.0.1", 00:18:25.767 "trsvcid": "55474" 00:18:25.767 }, 00:18:25.767 "auth": { 00:18:25.767 "state": "completed", 00:18:25.767 "digest": "sha384", 00:18:25.767 "dhgroup": "ffdhe4096" 00:18:25.767 } 00:18:25.767 } 00:18:25.767 ]' 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.767 06:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.767 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.031 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:26.031 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.604 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.867 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.130 00:18:27.131 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.131 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.131 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.402 { 00:18:27.402 "cntlid": 79, 00:18:27.402 "qid": 0, 00:18:27.402 "state": "enabled", 00:18:27.402 "thread": "nvmf_tgt_poll_group_000", 00:18:27.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:27.402 "listen_address": { 00:18:27.402 "trtype": "TCP", 00:18:27.402 "adrfam": "IPv4", 00:18:27.402 "traddr": "10.0.0.2", 00:18:27.402 "trsvcid": "4420" 00:18:27.402 }, 00:18:27.402 "peer_address": { 00:18:27.402 "trtype": "TCP", 00:18:27.402 "adrfam": "IPv4", 00:18:27.402 "traddr": "10.0.0.1", 00:18:27.402 "trsvcid": "55502" 00:18:27.402 }, 00:18:27.402 "auth": { 00:18:27.402 "state": "completed", 00:18:27.402 "digest": "sha384", 00:18:27.402 "dhgroup": "ffdhe4096" 00:18:27.402 } 00:18:27.402 } 00:18:27.402 ]' 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.402 06:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.402 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.673 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:27.673 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.278 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.279 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.279 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.279 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.547 06:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.547 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.816 00:18:28.816 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.816 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.816 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.082 { 00:18:29.082 "cntlid": 81, 00:18:29.082 "qid": 0, 00:18:29.082 "state": "enabled", 00:18:29.082 "thread": "nvmf_tgt_poll_group_000", 00:18:29.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:29.082 "listen_address": { 00:18:29.082 "trtype": "TCP", 00:18:29.082 "adrfam": "IPv4", 00:18:29.082 "traddr": "10.0.0.2", 00:18:29.082 "trsvcid": "4420" 00:18:29.082 }, 00:18:29.082 "peer_address": { 00:18:29.082 "trtype": "TCP", 00:18:29.082 "adrfam": "IPv4", 00:18:29.082 "traddr": "10.0.0.1", 00:18:29.082 "trsvcid": "55512" 00:18:29.082 }, 00:18:29.082 "auth": { 00:18:29.082 "state": "completed", 00:18:29.082 "digest": 
"sha384", 00:18:29.082 "dhgroup": "ffdhe6144" 00:18:29.082 } 00:18:29.082 } 00:18:29.082 ]' 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.082 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.346 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:29.346 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.924 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.192 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.462 00:18:30.462 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.462 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.462 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.732 { 00:18:30.732 "cntlid": 83, 00:18:30.732 "qid": 0, 00:18:30.732 "state": "enabled", 00:18:30.732 "thread": "nvmf_tgt_poll_group_000", 00:18:30.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:30.732 "listen_address": { 00:18:30.732 "trtype": "TCP", 00:18:30.732 "adrfam": "IPv4", 00:18:30.732 "traddr": "10.0.0.2", 00:18:30.732 
"trsvcid": "4420" 00:18:30.732 }, 00:18:30.732 "peer_address": { 00:18:30.732 "trtype": "TCP", 00:18:30.732 "adrfam": "IPv4", 00:18:30.732 "traddr": "10.0.0.1", 00:18:30.732 "trsvcid": "55550" 00:18:30.732 }, 00:18:30.732 "auth": { 00:18:30.732 "state": "completed", 00:18:30.732 "digest": "sha384", 00:18:30.732 "dhgroup": "ffdhe6144" 00:18:30.732 } 00:18:30.732 } 00:18:30.732 ]' 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.732 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.000 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.000 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.000 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.000 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:31.000 06:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:31.585 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:31.851 
06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.851 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.117 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.385 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.385 { 00:18:32.385 "cntlid": 85, 00:18:32.385 "qid": 0, 00:18:32.385 "state": "enabled", 00:18:32.385 "thread": "nvmf_tgt_poll_group_000", 00:18:32.385 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:32.385 "listen_address": { 00:18:32.385 "trtype": "TCP", 00:18:32.385 "adrfam": "IPv4", 00:18:32.385 "traddr": "10.0.0.2", 00:18:32.385 "trsvcid": "4420" 00:18:32.385 }, 00:18:32.385 "peer_address": { 00:18:32.385 "trtype": "TCP", 00:18:32.385 "adrfam": "IPv4", 00:18:32.385 "traddr": "10.0.0.1", 00:18:32.385 "trsvcid": "55580" 00:18:32.385 }, 00:18:32.385 "auth": { 00:18:32.385 "state": "completed", 00:18:32.385 "digest": "sha384", 00:18:32.385 "dhgroup": "ffdhe6144" 00:18:32.386 } 00:18:32.386 } 00:18:32.386 ]' 00:18:32.386 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.669 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.669 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:32.669 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.620 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:33.620 06:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.620 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.890 00:18:33.890 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.890 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.890 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.169 { 00:18:34.169 "cntlid": 87, 
00:18:34.169 "qid": 0, 00:18:34.169 "state": "enabled", 00:18:34.169 "thread": "nvmf_tgt_poll_group_000", 00:18:34.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:34.169 "listen_address": { 00:18:34.169 "trtype": "TCP", 00:18:34.169 "adrfam": "IPv4", 00:18:34.169 "traddr": "10.0.0.2", 00:18:34.169 "trsvcid": "4420" 00:18:34.169 }, 00:18:34.169 "peer_address": { 00:18:34.169 "trtype": "TCP", 00:18:34.169 "adrfam": "IPv4", 00:18:34.169 "traddr": "10.0.0.1", 00:18:34.169 "trsvcid": "55612" 00:18:34.169 }, 00:18:34.169 "auth": { 00:18:34.169 "state": "completed", 00:18:34.169 "digest": "sha384", 00:18:34.169 "dhgroup": "ffdhe6144" 00:18:34.169 } 00:18:34.169 } 00:18:34.169 ]' 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.169 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.438 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:34.438 06:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.019 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.284 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.864 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.864 { 00:18:35.864 "cntlid": 89, 00:18:35.864 "qid": 0, 00:18:35.864 "state": "enabled", 00:18:35.864 "thread": "nvmf_tgt_poll_group_000", 00:18:35.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:35.864 "listen_address": { 00:18:35.864 "trtype": "TCP", 00:18:35.864 "adrfam": "IPv4", 00:18:35.864 "traddr": "10.0.0.2", 00:18:35.864 "trsvcid": "4420" 00:18:35.864 }, 00:18:35.864 "peer_address": { 00:18:35.864 "trtype": "TCP", 00:18:35.864 "adrfam": "IPv4", 00:18:35.864 "traddr": "10.0.0.1", 00:18:35.864 "trsvcid": "50424" 00:18:35.864 }, 00:18:35.864 "auth": { 00:18:35.864 "state": "completed", 00:18:35.864 "digest": "sha384", 00:18:35.864 "dhgroup": "ffdhe8192" 00:18:35.864 } 00:18:35.864 } 00:18:35.864 ]' 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.864 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.135 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:36.136 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.112 06:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.112 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.704 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.704 { 00:18:37.704 "cntlid": 91, 00:18:37.704 "qid": 0, 00:18:37.704 "state": "enabled", 00:18:37.704 "thread": "nvmf_tgt_poll_group_000", 00:18:37.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:37.704 "listen_address": { 00:18:37.704 "trtype": "TCP", 00:18:37.704 "adrfam": "IPv4", 00:18:37.704 "traddr": "10.0.0.2", 00:18:37.704 "trsvcid": "4420" 00:18:37.704 }, 00:18:37.704 "peer_address": { 00:18:37.704 "trtype": "TCP", 00:18:37.704 "adrfam": "IPv4", 00:18:37.704 "traddr": "10.0.0.1", 00:18:37.704 "trsvcid": "50444" 00:18:37.704 }, 00:18:37.704 "auth": { 00:18:37.704 "state": "completed", 00:18:37.704 "digest": "sha384", 00:18:37.704 "dhgroup": "ffdhe8192" 00:18:37.704 } 00:18:37.704 } 00:18:37.704 ]' 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.704 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:37.980 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:38.607 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:38.876 06:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.876 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.465 00:18:39.465 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.465 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.465 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.465 06:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.465 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.465 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.465 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.731 { 00:18:39.731 "cntlid": 93, 00:18:39.731 "qid": 0, 00:18:39.731 "state": "enabled", 00:18:39.731 "thread": "nvmf_tgt_poll_group_000", 00:18:39.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:39.731 "listen_address": { 00:18:39.731 "trtype": "TCP", 00:18:39.731 "adrfam": "IPv4", 00:18:39.731 "traddr": "10.0.0.2", 00:18:39.731 "trsvcid": "4420" 00:18:39.731 }, 00:18:39.731 "peer_address": { 00:18:39.731 "trtype": "TCP", 00:18:39.731 "adrfam": "IPv4", 00:18:39.731 "traddr": "10.0.0.1", 00:18:39.731 "trsvcid": "50476" 00:18:39.731 }, 00:18:39.731 "auth": { 00:18:39.731 "state": "completed", 00:18:39.731 "digest": "sha384", 00:18:39.731 "dhgroup": "ffdhe8192" 00:18:39.731 } 00:18:39.731 } 00:18:39.731 ]' 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.731 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.732 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.732 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.732 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.998 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:39.998 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.591 06:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.591 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.591 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.862 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.130 00:18:41.130 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.130 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.130 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.399 { 00:18:41.399 "cntlid": 95, 00:18:41.399 "qid": 0, 00:18:41.399 "state": "enabled", 00:18:41.399 "thread": "nvmf_tgt_poll_group_000", 00:18:41.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:41.399 "listen_address": { 00:18:41.399 "trtype": "TCP", 00:18:41.399 "adrfam": "IPv4", 00:18:41.399 "traddr": "10.0.0.2", 00:18:41.399 "trsvcid": "4420" 00:18:41.399 }, 00:18:41.399 "peer_address": { 00:18:41.399 "trtype": "TCP", 00:18:41.399 "adrfam": "IPv4", 00:18:41.399 "traddr": "10.0.0.1", 00:18:41.399 "trsvcid": "50498" 00:18:41.399 }, 00:18:41.399 "auth": { 00:18:41.399 "state": "completed", 00:18:41.399 "digest": "sha384", 00:18:41.399 "dhgroup": "ffdhe8192" 00:18:41.399 } 00:18:41.399 } 00:18:41.399 ]' 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.399 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.668 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.668 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.668 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.668 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:41.668 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.254 06:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.254 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.526 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.796 00:18:42.796 
06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.796 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.796 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.066 { 00:18:43.066 "cntlid": 97, 00:18:43.066 "qid": 0, 00:18:43.066 "state": "enabled", 00:18:43.066 "thread": "nvmf_tgt_poll_group_000", 00:18:43.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:43.066 "listen_address": { 00:18:43.066 "trtype": "TCP", 00:18:43.066 "adrfam": "IPv4", 00:18:43.066 "traddr": "10.0.0.2", 00:18:43.066 "trsvcid": "4420" 00:18:43.066 }, 00:18:43.066 "peer_address": { 00:18:43.066 "trtype": "TCP", 00:18:43.066 "adrfam": "IPv4", 00:18:43.066 "traddr": "10.0.0.1", 00:18:43.066 "trsvcid": "50530" 00:18:43.066 }, 00:18:43.066 "auth": { 00:18:43.066 "state": "completed", 00:18:43.066 "digest": "sha512", 00:18:43.066 "dhgroup": "null" 00:18:43.066 } 00:18:43.066 } 00:18:43.066 ]' 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.066 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.337 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:43.337 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 
80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.965 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.232 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.232 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.502 { 00:18:44.502 "cntlid": 99, 00:18:44.502 "qid": 0, 00:18:44.502 "state": "enabled", 00:18:44.502 "thread": "nvmf_tgt_poll_group_000", 00:18:44.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:44.502 "listen_address": { 00:18:44.502 "trtype": "TCP", 00:18:44.502 "adrfam": "IPv4", 00:18:44.502 "traddr": "10.0.0.2", 00:18:44.502 "trsvcid": "4420" 00:18:44.502 }, 00:18:44.502 "peer_address": { 00:18:44.502 "trtype": "TCP", 00:18:44.502 "adrfam": "IPv4", 00:18:44.502 "traddr": "10.0.0.1", 00:18:44.502 "trsvcid": "50560" 00:18:44.502 }, 00:18:44.502 "auth": { 00:18:44.502 "state": "completed", 00:18:44.502 "digest": "sha512", 00:18:44.502 "dhgroup": "null" 00:18:44.502 } 00:18:44.502 } 00:18:44.502 ]' 00:18:44.502 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.502 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.502 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.502 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:44.774 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.774 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.774 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.774 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.774 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:44.774 06:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.357 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
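
The records around this point repeat the per-iteration pattern of connect_authenticate: restrict the host to one DH-HMAC-CHAP digest/dhgroup pair, authorize the host NQN on the subsystem with the key under test, attach a controller through the host bdev layer, read back the qpair's auth block, then detach. A condensed sketch of one such iteration follows; the HOSTNQN and RPC variables are shorthand introduced here for readability, and it assumes the target at 10.0.0.2:4420, the host app on /var/tmp/host.sock, and the named keyring entries are already set up, as they were earlier in this run:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: allow only the digest/dhgroup pair under test (here sha512/null)
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side: authorize this host with the key (and controller key) under test
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Target side: the qpair's auth block should report the negotiated digest and
    # dhgroup, with "state": "completed", exactly as the JSON dumps in this log do
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
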
00:18:45.629 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.904 00:18:45.904 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.904 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.904 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.173 { 00:18:46.173 "cntlid": 101, 00:18:46.173 "qid": 0, 00:18:46.173 "state": "enabled", 00:18:46.173 "thread": "nvmf_tgt_poll_group_000", 00:18:46.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:46.173 "listen_address": { 00:18:46.173 "trtype": "TCP", 00:18:46.173 "adrfam": "IPv4", 00:18:46.173 "traddr": "10.0.0.2", 00:18:46.173 "trsvcid": "4420" 00:18:46.173 }, 00:18:46.173 "peer_address": { 00:18:46.173 "trtype": "TCP", 00:18:46.173 "adrfam": "IPv4", 00:18:46.173 "traddr": "10.0.0.1", 00:18:46.173 "trsvcid": "38942" 00:18:46.173 }, 00:18:46.173 "auth": { 00:18:46.173 "state": "completed", 00:18:46.173 "digest": "sha512", 00:18:46.173 "dhgroup": "null" 00:18:46.173 } 00:18:46.173 } 00:18:46.173 ]' 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.173 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.445 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:46.445 06:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.042 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.318 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.318 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.595 { 00:18:47.595 "cntlid": 103, 00:18:47.595 "qid": 0, 00:18:47.595 "state": "enabled", 00:18:47.595 "thread": "nvmf_tgt_poll_group_000", 00:18:47.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:47.595 "listen_address": { 00:18:47.595 "trtype": "TCP", 00:18:47.595 "adrfam": "IPv4", 00:18:47.595 "traddr": "10.0.0.2", 00:18:47.595 "trsvcid": "4420" 00:18:47.595 }, 00:18:47.595 "peer_address": { 00:18:47.595 "trtype": "TCP", 00:18:47.595 "adrfam": "IPv4", 00:18:47.595 "traddr": "10.0.0.1", 00:18:47.595 "trsvcid": "38984" 00:18:47.595 }, 00:18:47.595 "auth": { 00:18:47.595 "state": "completed", 00:18:47.595 "digest": "sha512", 00:18:47.595 "dhgroup": "null" 00:18:47.595 } 00:18:47.595 } 00:18:47.595 ]' 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:47.595 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.873 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.873 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.873 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.873 06:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:47.873 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:48.490 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.490 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
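
After the rpc-level attach is verified, each iteration also exercises the kernel initiator: nvme connect is invoked with the generated DHHC-1 secrets passed inline, the disconnect is confirmed, and the host entry is removed from the subsystem before the next digest/dhgroup/key combination. A sketch of that leg, with the secrets replaced by placeholders (the real values are the generated keys visible in the trace) and reusing the HOSTNQN/RPC shorthand introduced above:

    # Kernel initiator: connect with the host secret, plus a controller secret when
    # the iteration has one (the key3 iterations in this log pass --dhchap-secret only)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # expected: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)

    # Drop the authorization so the next combination starts from a clean subsystem
    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
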
00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.779 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.092 00:18:49.092 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.092 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.092 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.420 { 00:18:49.420 "cntlid": 105, 00:18:49.420 "qid": 0, 00:18:49.420 "state": "enabled", 00:18:49.420 "thread": "nvmf_tgt_poll_group_000", 00:18:49.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:49.420 "listen_address": { 00:18:49.420 "trtype": "TCP", 00:18:49.420 "adrfam": "IPv4", 00:18:49.420 "traddr": "10.0.0.2", 00:18:49.420 "trsvcid": "4420" 00:18:49.420 }, 00:18:49.420 "peer_address": { 00:18:49.420 "trtype": "TCP", 00:18:49.420 "adrfam": "IPv4", 00:18:49.420 "traddr": "10.0.0.1", 00:18:49.420 "trsvcid": "39004" 00:18:49.420 }, 00:18:49.420 "auth": { 00:18:49.420 "state": "completed", 00:18:49.420 "digest": "sha512", 00:18:49.420 "dhgroup": "ffdhe2048" 00:18:49.420 } 00:18:49.420 } 00:18:49.420 ]' 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.420 06:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:49.420 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.050 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.051 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:50.051 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.312 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.585 00:18:50.585 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.585 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.585 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.847 { 00:18:50.847 "cntlid": 107, 00:18:50.847 "qid": 0, 00:18:50.847 "state": "enabled", 00:18:50.847 "thread": "nvmf_tgt_poll_group_000", 00:18:50.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:50.847 "listen_address": { 00:18:50.847 "trtype": "TCP", 00:18:50.847 "adrfam": "IPv4", 00:18:50.847 "traddr": "10.0.0.2", 00:18:50.847 "trsvcid": "4420" 00:18:50.847 }, 00:18:50.847 "peer_address": { 00:18:50.847 "trtype": "TCP", 00:18:50.847 "adrfam": "IPv4", 00:18:50.847 "traddr": "10.0.0.1", 00:18:50.847 "trsvcid": "39030" 00:18:50.847 }, 00:18:50.847 "auth": { 00:18:50.847 "state": "completed", 00:18:50.847 "digest": "sha512", 00:18:50.847 "dhgroup": "ffdhe2048" 00:18:50.847 } 00:18:50.847 } 00:18:50.847 ]' 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.847 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.107 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:51.107 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.677 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
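After every attach the test reads back what was actually negotiated: bdev_nvme_get_controllers on the host must report nvme0, and the subsystem's single queue pair on the target must advertise the expected digest, DH group, and a completed auth state. A condensed sketch of those checks, using the same jq filters as the [[ ... ]] comparisons in the xtrace above (the $qpairs variable and the one-line form are illustrative; the test runs each step through its rpc_cmd/hostrpc helpers):

    # The controller came up on the host under the expected name.
    [[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    # The target sees one qpair whose auth block matches this iteration.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]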
00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.937 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.197 00:18:52.197 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.197 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.197 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.456 { 00:18:52.456 "cntlid": 109, 00:18:52.456 "qid": 0, 00:18:52.456 "state": "enabled", 00:18:52.456 "thread": "nvmf_tgt_poll_group_000", 00:18:52.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:52.456 "listen_address": { 00:18:52.456 "trtype": "TCP", 00:18:52.456 "adrfam": "IPv4", 00:18:52.456 "traddr": "10.0.0.2", 00:18:52.456 "trsvcid": "4420" 00:18:52.456 }, 00:18:52.456 "peer_address": { 00:18:52.456 "trtype": "TCP", 00:18:52.456 "adrfam": "IPv4", 00:18:52.456 "traddr": "10.0.0.1", 00:18:52.456 "trsvcid": "39050" 00:18:52.456 }, 00:18:52.456 "auth": { 00:18:52.456 "state": "completed", 00:18:52.456 "digest": "sha512", 00:18:52.456 "dhgroup": "ffdhe2048" 00:18:52.456 } 00:18:52.456 } 00:18:52.456 ]' 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.456 06:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.456 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.717 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:52.717 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:53.287 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:53.547 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:53.547 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.547 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.547 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:53.547 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.548 06:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.548 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.869 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.869 { 00:18:53.869 "cntlid": 111, 00:18:53.869 "qid": 0, 00:18:53.869 "state": "enabled", 00:18:53.869 "thread": "nvmf_tgt_poll_group_000", 00:18:53.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:53.869 "listen_address": { 00:18:53.869 "trtype": "TCP", 00:18:53.869 "adrfam": "IPv4", 00:18:53.869 "traddr": "10.0.0.2", 00:18:53.869 "trsvcid": "4420" 00:18:53.869 }, 00:18:53.869 "peer_address": { 00:18:53.869 "trtype": "TCP", 00:18:53.869 "adrfam": "IPv4", 00:18:53.869 "traddr": "10.0.0.1", 00:18:53.869 "trsvcid": "39060" 00:18:53.869 }, 00:18:53.869 "auth": { 00:18:53.869 "state": "completed", 00:18:53.869 "digest": "sha512", 00:18:53.869 "dhgroup": "ffdhe2048" 00:18:53.869 } 00:18:53.869 } 00:18:53.869 ]' 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.869 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.869 
06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:54.129 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.067 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.326 00:18:55.326 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.326 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.326 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.587 { 00:18:55.587 "cntlid": 113, 00:18:55.587 "qid": 0, 00:18:55.587 "state": "enabled", 00:18:55.587 "thread": "nvmf_tgt_poll_group_000", 00:18:55.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:55.587 "listen_address": { 00:18:55.587 "trtype": "TCP", 00:18:55.587 "adrfam": "IPv4", 00:18:55.587 "traddr": "10.0.0.2", 00:18:55.587 "trsvcid": "4420" 00:18:55.587 }, 00:18:55.587 "peer_address": { 00:18:55.587 "trtype": "TCP", 00:18:55.587 "adrfam": "IPv4", 00:18:55.587 "traddr": "10.0.0.1", 00:18:55.587 "trsvcid": "35222" 00:18:55.587 }, 00:18:55.587 "auth": { 00:18:55.587 "state": "completed", 00:18:55.587 "digest": "sha512", 00:18:55.587 "dhgroup": "ffdhe3072" 00:18:55.587 } 00:18:55.587 } 00:18:55.587 ]' 00:18:55.587 06:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.587 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.587 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.587 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.587 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.587 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.587 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.847 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:55.847 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:18:56.417 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.418 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:56.678 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.679 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.939 00:18:56.939 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.939 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.939 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.200 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.200 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.200 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.200 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.201 { 00:18:57.201 "cntlid": 115, 00:18:57.201 "qid": 0, 00:18:57.201 "state": "enabled", 00:18:57.201 "thread": "nvmf_tgt_poll_group_000", 00:18:57.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:57.201 "listen_address": { 00:18:57.201 "trtype": "TCP", 00:18:57.201 "adrfam": "IPv4", 00:18:57.201 "traddr": "10.0.0.2", 00:18:57.201 "trsvcid": "4420" 00:18:57.201 }, 00:18:57.201 "peer_address": { 00:18:57.201 "trtype": "TCP", 00:18:57.201 "adrfam": "IPv4", 
00:18:57.201 "traddr": "10.0.0.1", 00:18:57.201 "trsvcid": "35256" 00:18:57.201 }, 00:18:57.201 "auth": { 00:18:57.201 "state": "completed", 00:18:57.201 "digest": "sha512", 00:18:57.201 "dhgroup": "ffdhe3072" 00:18:57.201 } 00:18:57.201 } 00:18:57.201 ]' 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.201 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.460 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:57.460 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.030 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.031 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
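Each iteration also exercises the kernel initiator: nvme-cli connects to the same subsystem, passing the DH-HMAC-CHAP material inline as DHHC-1 blobs rather than as keyring names, and is then disconnected and deregistered from the subsystem. A sketch of that host-side step, assuming the two secrets sit in shell variables rather than being pasted literally (the actual DHHC-1:01:/DHHC-1:02: values are the ones printed in the log above; all flags mirror the logged command):

    # Kernel-initiator half of the handshake: secrets travel as DHHC-1 blobs.
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The second field of a DHHC-1 blob tags how the secret was derived (00 for a non-transformed secret, 01/02/03 for SHA-256/384/512-transformed ones, per the NVMe-oF secret representation), which is why the key0 secret in this log starts with DHHC-1:00: while the key3 secret starts with DHHC-1:03:.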
00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.291 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.552 00:18:58.552 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.552 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.552 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.552 { 00:18:58.552 "cntlid": 117, 00:18:58.552 "qid": 0, 00:18:58.552 "state": "enabled", 00:18:58.552 "thread": "nvmf_tgt_poll_group_000", 00:18:58.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:58.552 "listen_address": { 00:18:58.552 "trtype": "TCP", 
00:18:58.552 "adrfam": "IPv4", 00:18:58.552 "traddr": "10.0.0.2", 00:18:58.552 "trsvcid": "4420" 00:18:58.552 }, 00:18:58.552 "peer_address": { 00:18:58.552 "trtype": "TCP", 00:18:58.552 "adrfam": "IPv4", 00:18:58.552 "traddr": "10.0.0.1", 00:18:58.552 "trsvcid": "35272" 00:18:58.552 }, 00:18:58.552 "auth": { 00:18:58.552 "state": "completed", 00:18:58.552 "digest": "sha512", 00:18:58.552 "dhgroup": "ffdhe3072" 00:18:58.552 } 00:18:58.552 } 00:18:58.552 ]' 00:18:58.552 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.813 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.075 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:59.075 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.647 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.907 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.168 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.168 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.428 { 00:19:00.428 "cntlid": 119, 00:19:00.428 "qid": 0, 00:19:00.428 "state": "enabled", 00:19:00.428 "thread": "nvmf_tgt_poll_group_000", 00:19:00.428 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:00.428 "listen_address": { 00:19:00.428 "trtype": "TCP", 00:19:00.428 "adrfam": "IPv4", 00:19:00.428 "traddr": "10.0.0.2", 00:19:00.428 "trsvcid": "4420" 00:19:00.428 }, 00:19:00.428 "peer_address": { 00:19:00.428 "trtype": "TCP", 00:19:00.428 "adrfam": "IPv4", 00:19:00.428 "traddr": "10.0.0.1", 00:19:00.428 "trsvcid": "35286" 00:19:00.428 }, 00:19:00.428 "auth": { 00:19:00.428 "state": "completed", 00:19:00.428 "digest": "sha512", 00:19:00.428 "dhgroup": "ffdhe3072" 00:19:00.428 } 00:19:00.428 } 00:19:00.428 ]' 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.428 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.429 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.694 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:00.694 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.267 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.267 06:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.528 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.789 00:19:01.789 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.789 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.789 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.049 06:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.049 { 00:19:02.049 "cntlid": 121, 00:19:02.049 "qid": 0, 00:19:02.049 "state": "enabled", 00:19:02.049 "thread": "nvmf_tgt_poll_group_000", 00:19:02.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:02.049 "listen_address": { 00:19:02.049 "trtype": "TCP", 00:19:02.049 "adrfam": "IPv4", 00:19:02.049 "traddr": "10.0.0.2", 00:19:02.049 "trsvcid": "4420" 00:19:02.049 }, 00:19:02.049 "peer_address": { 00:19:02.049 "trtype": "TCP", 00:19:02.049 "adrfam": "IPv4", 00:19:02.049 "traddr": "10.0.0.1", 00:19:02.049 "trsvcid": "35308" 00:19:02.049 }, 00:19:02.049 "auth": { 00:19:02.049 "state": "completed", 00:19:02.049 "digest": "sha512", 00:19:02.049 "dhgroup": "ffdhe4096" 00:19:02.049 } 00:19:02.049 } 00:19:02.049 ]' 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.049 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.309 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:19:02.309 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:19:02.879 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
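For reference, each sha512 iteration traced above reduces to the cycle sketched below. This is a condensed reconstruction from the commands visible in this log, not a verbatim excerpt: key0/ckey0 are DH-HMAC-CHAP key names registered with the target earlier in the run (outside this excerpt), rpc_cmd is assumed to be the autotest helper that issues the same RPCs against the target application's own RPC socket, and <secret>/<ctrl-secret> stand in for the generated DHHC-1 strings.

# Sketch of one connect_authenticate cycle, assembled from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a

# Host side: restrict the SPDK initiator to one digest/dhgroup combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: authorize the host NQN with a DH-HMAC-CHAP key (plus optional controller key).
# rpc_cmd is the test harness wrapper; key0/ckey0 were loaded before this excerpt.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; the fabric connect performs the authentication.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm what the queue pair actually negotiated.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"

# Tear down, then repeat the handshake with the kernel initiator using raw secrets
# (placeholders below; the real values are the DHHC-1 strings shown in this log).
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
  --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 \
  --dhchap-secret 'DHHC-1:00:<secret>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The trace asserts .auth.digest and .auth.dhgroup against the values it just configured, which is why every block above and below ends with the same three jq checks before the controller is detached.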
00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.880 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.139 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.398 00:19:03.398 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.398 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.398 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.658 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.658 { 00:19:03.658 "cntlid": 123, 00:19:03.658 "qid": 0, 00:19:03.658 "state": "enabled", 00:19:03.658 "thread": "nvmf_tgt_poll_group_000", 00:19:03.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:03.658 "listen_address": { 00:19:03.658 "trtype": "TCP", 00:19:03.658 "adrfam": "IPv4", 00:19:03.658 "traddr": "10.0.0.2", 00:19:03.658 "trsvcid": "4420" 00:19:03.658 }, 00:19:03.658 "peer_address": { 00:19:03.658 "trtype": "TCP", 00:19:03.658 "adrfam": "IPv4", 00:19:03.658 "traddr": "10.0.0.1", 00:19:03.658 "trsvcid": "35334" 00:19:03.658 }, 00:19:03.658 "auth": { 00:19:03.658 "state": "completed", 00:19:03.658 "digest": "sha512", 00:19:03.658 "dhgroup": "ffdhe4096" 00:19:03.658 } 00:19:03.658 } 00:19:03.658 ]' 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.658 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.917 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:19:03.917 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:19:04.484 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.484 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:04.484 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.484 06:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.484 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.484 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.484 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.484 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.743 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.003 00:19:05.003 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.003 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.003 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.263 06:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.263 { 00:19:05.263 "cntlid": 125, 00:19:05.263 "qid": 0, 00:19:05.263 "state": "enabled", 00:19:05.263 "thread": "nvmf_tgt_poll_group_000", 00:19:05.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:05.263 "listen_address": { 00:19:05.263 "trtype": "TCP", 00:19:05.263 "adrfam": "IPv4", 00:19:05.263 "traddr": "10.0.0.2", 00:19:05.263 "trsvcid": "4420" 00:19:05.263 }, 00:19:05.263 "peer_address": { 00:19:05.263 "trtype": "TCP", 00:19:05.263 "adrfam": "IPv4", 00:19:05.263 "traddr": "10.0.0.1", 00:19:05.263 "trsvcid": "39296" 00:19:05.263 }, 00:19:05.263 "auth": { 00:19:05.263 "state": "completed", 00:19:05.263 "digest": "sha512", 00:19:05.263 "dhgroup": "ffdhe4096" 00:19:05.263 } 00:19:05.263 } 00:19:05.263 ]' 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.263 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.525 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:19:05.525 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:06.097 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.359 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.620 00:19:06.621 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.621 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.621 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.882 06:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.882 { 00:19:06.882 "cntlid": 127, 00:19:06.882 "qid": 0, 00:19:06.882 "state": "enabled", 00:19:06.882 "thread": "nvmf_tgt_poll_group_000", 00:19:06.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:06.882 "listen_address": { 00:19:06.882 "trtype": "TCP", 00:19:06.882 "adrfam": "IPv4", 00:19:06.882 "traddr": "10.0.0.2", 00:19:06.882 "trsvcid": "4420" 00:19:06.882 }, 00:19:06.882 "peer_address": { 00:19:06.882 "trtype": "TCP", 00:19:06.882 "adrfam": "IPv4", 00:19:06.882 "traddr": "10.0.0.1", 00:19:06.882 "trsvcid": "39326" 00:19:06.882 }, 00:19:06.882 "auth": { 00:19:06.882 "state": "completed", 00:19:06.882 "digest": "sha512", 00:19:06.882 "dhgroup": "ffdhe4096" 00:19:06.882 } 00:19:06.882 } 00:19:06.882 ]' 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.882 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.142 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:07.143 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.714 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.975 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.235 00:19:08.235 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.235 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.235 
06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.496 { 00:19:08.496 "cntlid": 129, 00:19:08.496 "qid": 0, 00:19:08.496 "state": "enabled", 00:19:08.496 "thread": "nvmf_tgt_poll_group_000", 00:19:08.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:08.496 "listen_address": { 00:19:08.496 "trtype": "TCP", 00:19:08.496 "adrfam": "IPv4", 00:19:08.496 "traddr": "10.0.0.2", 00:19:08.496 "trsvcid": "4420" 00:19:08.496 }, 00:19:08.496 "peer_address": { 00:19:08.496 "trtype": "TCP", 00:19:08.496 "adrfam": "IPv4", 00:19:08.496 "traddr": "10.0.0.1", 00:19:08.496 "trsvcid": "39356" 00:19:08.496 }, 00:19:08.496 "auth": { 00:19:08.496 "state": "completed", 00:19:08.496 "digest": "sha512", 00:19:08.496 "dhgroup": "ffdhe6144" 00:19:08.496 } 00:19:08.496 } 00:19:08.496 ]' 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.496 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.496 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.496 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.496 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.773 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:19:08.773 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret 
DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=: 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.344 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.605 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.866 00:19:09.866 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.866 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.866 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.127 { 00:19:10.127 "cntlid": 131, 00:19:10.127 "qid": 0, 00:19:10.127 "state": "enabled", 00:19:10.127 "thread": "nvmf_tgt_poll_group_000", 00:19:10.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:10.127 "listen_address": { 00:19:10.127 "trtype": "TCP", 00:19:10.127 "adrfam": "IPv4", 00:19:10.127 "traddr": "10.0.0.2", 00:19:10.127 "trsvcid": "4420" 00:19:10.127 }, 00:19:10.127 "peer_address": { 00:19:10.127 "trtype": "TCP", 00:19:10.127 "adrfam": "IPv4", 00:19:10.127 "traddr": "10.0.0.1", 00:19:10.127 "trsvcid": "39388" 00:19:10.127 }, 00:19:10.127 "auth": { 00:19:10.127 "state": "completed", 00:19:10.127 "digest": "sha512", 00:19:10.127 "dhgroup": "ffdhe6144" 00:19:10.127 } 00:19:10.127 } 00:19:10.127 ]' 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.127 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.128 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.128 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.389 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:19:10.389 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==: 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.967 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.229 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.489 00:19:11.489 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.489 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.489 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.750 { 00:19:11.750 "cntlid": 133, 00:19:11.750 "qid": 0, 00:19:11.750 "state": "enabled", 00:19:11.750 "thread": "nvmf_tgt_poll_group_000", 00:19:11.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:11.750 "listen_address": { 00:19:11.750 "trtype": "TCP", 00:19:11.750 "adrfam": "IPv4", 00:19:11.750 "traddr": "10.0.0.2", 00:19:11.750 "trsvcid": "4420" 00:19:11.750 }, 00:19:11.750 "peer_address": { 00:19:11.750 "trtype": "TCP", 00:19:11.750 "adrfam": "IPv4", 00:19:11.750 "traddr": "10.0.0.1", 00:19:11.750 "trsvcid": "39422" 00:19:11.750 }, 00:19:11.750 "auth": { 00:19:11.750 "state": "completed", 00:19:11.750 "digest": "sha512", 00:19:11.750 "dhgroup": "ffdhe6144" 00:19:11.750 } 00:19:11.750 } 00:19:11.750 ]' 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.750 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.010 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret 
DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:19:12.010 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp: 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.580 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:12.839 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.099 00:19:13.099 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.099 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.099 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.359 { 00:19:13.359 "cntlid": 135, 00:19:13.359 "qid": 0, 00:19:13.359 "state": "enabled", 00:19:13.359 "thread": "nvmf_tgt_poll_group_000", 00:19:13.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:13.359 "listen_address": { 00:19:13.359 "trtype": "TCP", 00:19:13.359 "adrfam": "IPv4", 00:19:13.359 "traddr": "10.0.0.2", 00:19:13.359 "trsvcid": "4420" 00:19:13.359 }, 00:19:13.359 "peer_address": { 00:19:13.359 "trtype": "TCP", 00:19:13.359 "adrfam": "IPv4", 00:19:13.359 "traddr": "10.0.0.1", 00:19:13.359 "trsvcid": "39442" 00:19:13.359 }, 00:19:13.359 "auth": { 00:19:13.359 "state": "completed", 00:19:13.359 "digest": "sha512", 00:19:13.359 "dhgroup": "ffdhe6144" 00:19:13.359 } 00:19:13.359 } 00:19:13.359 ]' 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.359 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.619 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.619 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.619 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.619 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:13.619 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:14.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.558 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:15.129
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:15.129 {
00:19:15.129 "cntlid": 137,
00:19:15.129 "qid": 0,
00:19:15.129 "state": "enabled",
00:19:15.129 "thread": "nvmf_tgt_poll_group_000",
00:19:15.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:15.129 "listen_address": {
00:19:15.129 "trtype": "TCP",
00:19:15.129 "adrfam": "IPv4",
00:19:15.129 "traddr": "10.0.0.2",
00:19:15.129 "trsvcid": "4420"
00:19:15.129 },
00:19:15.129 "peer_address": {
00:19:15.129 "trtype": "TCP",
00:19:15.129 "adrfam": "IPv4",
00:19:15.129 "traddr": "10.0.0.1",
00:19:15.129 "trsvcid": "39460"
00:19:15.129 },
00:19:15.129 "auth": {
00:19:15.129 "state": "completed",
00:19:15.129 "digest": "sha512",
00:19:15.129 "dhgroup": "ffdhe8192"
00:19:15.129 }
00:19:15.129 }
00:19:15.129 ]'
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:15.129 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=:
00:19:15.390 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=:
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.351 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
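(Annotation: the trace above is one full pass of the test's per-key loop: the host is pinned to a single digest/DH-group pair, the host NQN is registered on the target subsystem with a key and a controller key, a bdev controller is attached through the host RPC socket, the negotiated parameters are verified, and the session is torn down before the next key is tried. Replayed by hand it would look roughly like the sketch below; every command, socket, NQN and key name is taken verbatim from the trace, only the $rpc and $hostnqn shorthands are introduced here for readability.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  # host side: restrict DH-HMAC-CHAP to one digest and one DH group
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: allow this host with key0 (ckey0 additionally requests controller authentication)
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attaching a controller triggers the authentication exchange
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0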
00:19:16.352 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.352 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.352 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.352 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.920
00:19:16.920 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:16.920 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:16.920 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:16.921 {
00:19:16.921 "cntlid": 139,
00:19:16.921 "qid": 0,
00:19:16.921 "state": "enabled",
00:19:16.921 "thread": "nvmf_tgt_poll_group_000",
00:19:16.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:16.921 "listen_address": {
00:19:16.921 "trtype": "TCP",
00:19:16.921 "adrfam": "IPv4",
00:19:16.921 "traddr": "10.0.0.2",
00:19:16.921 "trsvcid": "4420"
00:19:16.921 },
00:19:16.921 "peer_address": {
00:19:16.921 "trtype": "TCP",
00:19:16.921 "adrfam": "IPv4",
00:19:16.921 "traddr": "10.0.0.1",
00:19:16.921 "trsvcid": "50526"
00:19:16.921 },
00:19:16.921 "auth": {
00:19:16.921 "state": "completed",
00:19:16.921 "digest": "sha512",
00:19:16.921 "dhgroup": "ffdhe8192"
00:19:16.921 }
00:19:16.921 }
00:19:16.921 ]'
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:16.921 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==:
00:19:17.181 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: --dhchap-ctrl-secret DHHC-1:02:ODU1MDU4MWQ5NmZlMjJkZDJmMjcxMjJkOGNhNTYwOGJiMzBkMWNhODg1MzdkYTY4EBx+bw==:
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:18.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.121 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.692
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:18.692 {
00:19:18.692 "cntlid": 141,
00:19:18.692 "qid": 0,
00:19:18.692 "state": "enabled",
00:19:18.692 "thread": "nvmf_tgt_poll_group_000",
00:19:18.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:18.692 "listen_address": {
00:19:18.692 "trtype": "TCP",
00:19:18.692 "adrfam": "IPv4",
00:19:18.692 "traddr": "10.0.0.2",
00:19:18.692 "trsvcid": "4420"
00:19:18.692 },
00:19:18.692 "peer_address": {
00:19:18.692 "trtype": "TCP",
00:19:18.692 "adrfam": "IPv4",
00:19:18.692 "traddr": "10.0.0.1",
00:19:18.692 "trsvcid": "50564"
00:19:18.692 },
00:19:18.692 "auth": {
00:19:18.692 "state": "completed",
00:19:18.692 "digest": "sha512",
00:19:18.692 "dhgroup": "ffdhe8192"
00:19:18.692 }
00:19:18.692 }
00:19:18.692 ]'
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:18.692 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
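(Annotation: the qpairs dumps above are the oracle for each pass: the "auth" object of every qpair reports the digest, the DH group and the final authentication state that were actually negotiated. A minimal sketch of the same check, where rpc.py stands for the full scripts/rpc.py path used throughout this trace and, without -s, talks to the target's default /var/tmp/spdk.sock:)

  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expected: completed
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expected: sha512
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expected: ffdhe8192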
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp:
00:19:18.953 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:01:ODNjNzA4YzFhZTgyNjcxYTgzYWQ4YjExY2NmYmI1OWY6vyUp:
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:19.896 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:20.467
00:19:20.467 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:20.467 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:20.467 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:20.467 {
00:19:20.467 "cntlid": 143,
00:19:20.467 "qid": 0,
00:19:20.467 "state": "enabled",
00:19:20.467 "thread": "nvmf_tgt_poll_group_000",
00:19:20.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:20.467 "listen_address": {
00:19:20.467 "trtype": "TCP",
00:19:20.467 "adrfam": "IPv4",
00:19:20.467 "traddr": "10.0.0.2",
00:19:20.467 "trsvcid": "4420"
00:19:20.467 },
00:19:20.467 "peer_address": {
00:19:20.467 "trtype": "TCP",
00:19:20.467 "adrfam": "IPv4",
00:19:20.467 "traddr": "10.0.0.1",
00:19:20.467 "trsvcid": "50584"
00:19:20.467 },
00:19:20.467 "auth": {
00:19:20.467 "state": "completed",
00:19:20.467 "digest": "sha512",
00:19:20.467 "dhgroup": "ffdhe8192"
00:19:20.467 }
00:19:20.467 }
00:19:20.467 ]'
00:19:20.467 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.727 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.995 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.995 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:20.995 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:21.566 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
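(Annotation: two details of the pass above are worth flagging. First, key3 has no companion ckey3, so the ${ckeys[$3]:+...} expansion at auth.sh@68 is empty and the host is registered with --dhchap-key key3 alone; the target still authenticates the host, but no controller authentication is requested, making this the unidirectional variant of the exchange. Second, from auth.sh@129 onward the host is reconfigured with every digest and DH group it supports before the loop re-runs, checking that negotiation still converges when both sides have the full menu available; the reconfiguration command, verbatim from the trace:)

  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192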
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.566 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.826 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.826 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.826 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.826 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.085
00:19:22.085 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:22.085 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:22.085 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.344 {
00:19:22.344 "cntlid": 145,
00:19:22.344 "qid": 0,
00:19:22.344 "state": "enabled",
00:19:22.344 "thread": "nvmf_tgt_poll_group_000",
00:19:22.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:22.344 "listen_address": {
00:19:22.344 "trtype": "TCP",
00:19:22.344 "adrfam": "IPv4",
00:19:22.344 "traddr": "10.0.0.2",
00:19:22.344 "trsvcid": "4420"
00:19:22.344 },
00:19:22.344 "peer_address": {
00:19:22.344 "trtype": "TCP",
00:19:22.344 "adrfam": "IPv4",
00:19:22.344 "traddr": "10.0.0.1",
00:19:22.344 "trsvcid": "50602"
00:19:22.344 },
00:19:22.344 "auth": {
00:19:22.344 "state": "completed",
00:19:22.344 "digest": "sha512",
00:19:22.344 "dhgroup": "ffdhe8192"
00:19:22.344 }
00:19:22.344 }
00:19:22.344 ]'
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:22.344 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:22.604 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.604 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.604 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.604 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=:
00:19:22.604 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:MDAxOGE5NDk1MDJhNzM2ODFiYzUyY2MxZmU2OThmOTBiOTMwZjU3ZDQ0ZjE4NmU1ABSl4A==: --dhchap-ctrl-secret DHHC-1:03:MTMyODEzMGVmNWE5NzE3YWJlNzM5ZTEzMmY5NDNlMDBiMDgzOGZiZTI5MmY4OTQ2M2IxMDFlYjU5MTA2MWUzN2CRBlA=:
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.173 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1
00:19:23.174 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.174 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:19:23.455 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:19:23.746 request:
00:19:23.746 {
00:19:23.746 "name": "nvme0",
00:19:23.746 "trtype": "tcp",
00:19:23.746 "traddr": "10.0.0.2",
00:19:23.746 "adrfam": "ipv4",
00:19:23.746 "trsvcid": "4420",
00:19:23.746 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:23.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:23.746 "prchk_reftag": false,
00:19:23.746 "prchk_guard": false,
00:19:23.746 "hdgst": false,
00:19:23.746 "ddgst": false,
00:19:23.746 "dhchap_key": "key2",
00:19:23.746 "allow_unrecognized_csi": false,
00:19:23.746 "method": "bdev_nvme_attach_controller",
00:19:23.746 "req_id": 1
00:19:23.746 }
00:19:23.746 Got JSON-RPC error response
00:19:23.746 response:
00:19:23.746 {
00:19:23.746 "code": -5,
00:19:23.746 "message": "Input/output error"
00:19:23.746 }
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.746 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
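(Annotation: the request/response pair above is the first negative test. The target was told to accept this host only under key1 (auth.sh@144), while the host attached with key2, so the DH-HMAC-CHAP handshake fails and bdev_nvme_attach_controller surfaces JSON-RPC error -5, "Input/output error"; the NOT wrapper then asserts the non-zero exit. A sketch of the same probe, reusing the $rpc and $hostnqn shorthands from the earlier annotation:)

  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
  # the attach below must fail: the host offers key2, which the target does not accept
  if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
      echo "attach unexpectedly succeeded with a mismatched key" >&2
      exit 1
  fi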
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:23.747 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:24.335 request:
00:19:24.335 {
00:19:24.335 "name": "nvme0",
00:19:24.335 "trtype": "tcp",
00:19:24.335 "traddr": "10.0.0.2",
00:19:24.335 "adrfam": "ipv4",
00:19:24.335 "trsvcid": "4420",
00:19:24.335 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:24.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:24.335 "prchk_reftag": false,
00:19:24.335 "prchk_guard": false,
00:19:24.335 "hdgst": false,
00:19:24.335 "ddgst": false,
00:19:24.335 "dhchap_key": "key1",
00:19:24.335 "dhchap_ctrlr_key": "ckey2",
00:19:24.335 "allow_unrecognized_csi": false,
00:19:24.335 "method": "bdev_nvme_attach_controller",
00:19:24.335 "req_id": 1
00:19:24.335 }
00:19:24.335 Got JSON-RPC error response
00:19:24.335 response:
00:19:24.335 {
00:19:24.335 "code": -5,
00:19:24.335 "message": "Input/output error"
00:19:24.335 }
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.335 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.620 request:
00:19:24.620 {
00:19:24.620 "name": "nvme0",
00:19:24.620 "trtype": "tcp",
00:19:24.620 "traddr": "10.0.0.2",
00:19:24.620 "adrfam": "ipv4",
00:19:24.620 "trsvcid": "4420",
00:19:24.620 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:24.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:24.620 "prchk_reftag": false,
00:19:24.620 "prchk_guard": false,
00:19:24.620 "hdgst": false,
00:19:24.620 "ddgst": false,
00:19:24.620 "dhchap_key": "key1",
00:19:24.620 "dhchap_ctrlr_key": "ckey1",
00:19:24.620 "allow_unrecognized_csi": false,
00:19:24.620 "method": "bdev_nvme_attach_controller",
00:19:24.620 "req_id": 1
00:19:24.620 }
00:19:24.620 Got JSON-RPC error response
00:19:24.620 response:
00:19:24.620 {
00:19:24.620 "code": -5,
00:19:24.620 "message": "Input/output error"
00:19:24.620 }
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 306767
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 306767 ']'
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 306767
00:19:24.620 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306767
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306767'
killing process with pid 306767
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 306767
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 306767
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=329776
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 329776
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 329776 ']'
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.897 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 329776
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 329776 ']'
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.836 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 null0
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JU8
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.s7j ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s7j
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Brd
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.F4x ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F4x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jjr
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.p9x ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p9x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dcR
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:26.097 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
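(Annotation: the fresh target (pid 329776) starts with an empty keyring, so the test first re-creates the null0 bdev and reloads every DH-HMAC-CHAP key via keyring_file_add_key, as recorded above; the key names and file paths below are exactly those in the trace, with rpc.py again standing for the full scripts/rpc.py path against the target's default socket. key3 once more has no companion ckey, which the upcoming one-way and expected-failure cases rely on.)

  rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.JU8
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s7j
  rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.Brd
  rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F4x
  rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha384.jjr
  rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p9x
  rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.dcR   # no ckey3: host-side authentication only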
00:19:26.098 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:27.038 nvme0n1
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:27.038 {
00:19:27.038 "cntlid": 1,
00:19:27.038 "qid": 0,
00:19:27.038 "state": "enabled",
00:19:27.038 "thread": "nvmf_tgt_poll_group_000",
00:19:27.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:27.038 "listen_address": {
00:19:27.038 "trtype": "TCP",
00:19:27.038 "adrfam": "IPv4",
00:19:27.038 "traddr": "10.0.0.2",
00:19:27.038 "trsvcid": "4420"
00:19:27.038 },
00:19:27.038 "peer_address": {
00:19:27.038 "trtype": "TCP",
00:19:27.038 "adrfam": "IPv4",
00:19:27.038 "traddr": "10.0.0.1",
00:19:27.038 "trsvcid": "36282"
00:19:27.038 },
00:19:27.038 "auth": {
00:19:27.038 "state": "completed",
00:19:27.038 "digest": "sha512",
00:19:27.038 "dhgroup": "ffdhe8192"
00:19:27.038 }
00:19:27.038 }
00:19:27.038 ]'
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:27.038 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:27.298 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.298 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.298 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.298 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:27.298 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=:
00:19:27.868 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:28.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:28.128 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:28.129 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:28.129 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:28.389 request:
00:19:28.389 {
00:19:28.389 "name": "nvme0",
00:19:28.389 "trtype": "tcp",
00:19:28.389 "traddr": "10.0.0.2",
00:19:28.389 "adrfam": "ipv4",
00:19:28.389 "trsvcid": "4420",
00:19:28.389 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:28.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a",
00:19:28.389 "prchk_reftag": false,
00:19:28.389 "prchk_guard": false,
00:19:28.389 "hdgst": false,
00:19:28.389 "ddgst": false,
00:19:28.389 "dhchap_key": "key3",
00:19:28.389 "allow_unrecognized_csi": false,
00:19:28.389 "method": "bdev_nvme_attach_controller",
00:19:28.389 "req_id": 1
00:19:28.389 }
00:19:28.389 Got JSON-RPC error response
00:19:28.389 response:
00:19:28.389 {
00:19:28.389 "code": -5,
00:19:28.389 "message": "Input/output error"
00:19:28.389 }
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:19:28.389 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.649 request: 00:19:28.649 { 00:19:28.649 "name": "nvme0", 00:19:28.649 "trtype": "tcp", 00:19:28.649 "traddr": "10.0.0.2", 00:19:28.649 "adrfam": "ipv4", 00:19:28.649 "trsvcid": "4420", 00:19:28.649 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:28.649 "prchk_reftag": false, 00:19:28.649 "prchk_guard": false, 00:19:28.649 "hdgst": false, 00:19:28.649 "ddgst": false, 00:19:28.649 "dhchap_key": "key3", 00:19:28.649 "allow_unrecognized_csi": false, 00:19:28.649 "method": "bdev_nvme_attach_controller", 00:19:28.649 "req_id": 1 00:19:28.649 } 00:19:28.649 Got JSON-RPC error response 00:19:28.649 response: 00:19:28.649 { 00:19:28.649 "code": -5, 00:19:28.649 "message": "Input/output error" 00:19:28.649 } 00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.649 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:28.909 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.910 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:28.910 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.910 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.910 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.910 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:29.170 request: 00:19:29.170 { 00:19:29.170 "name": "nvme0", 00:19:29.170 "trtype": "tcp", 00:19:29.170 "traddr": "10.0.0.2", 00:19:29.170 "adrfam": "ipv4", 00:19:29.170 "trsvcid": "4420", 00:19:29.170 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:29.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:29.170 "prchk_reftag": false, 00:19:29.170 "prchk_guard": false, 00:19:29.170 "hdgst": false, 00:19:29.170 "ddgst": false, 00:19:29.170 "dhchap_key": "key0", 00:19:29.170 "dhchap_ctrlr_key": "key1", 00:19:29.170 "allow_unrecognized_csi": false, 00:19:29.170 "method": "bdev_nvme_attach_controller", 00:19:29.170 "req_id": 1 00:19:29.170 } 00:19:29.170 Got JSON-RPC error response 00:19:29.170 response: 00:19:29.170 { 00:19:29.170 "code": -5, 00:19:29.170 "message": "Input/output error" 00:19:29.170 } 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.170 06:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:29.170 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:29.430 nvme0n1 00:19:29.430 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:29.430 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:29.430 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.690 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.690 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.690 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.949 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.518 nvme0n1 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:30.777 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.036 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.036 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:31.036 06:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: --dhchap-ctrl-secret DHHC-1:03:ZDAwY2MzNmM4YmFiYjE4ZTg5OTI4MDczZGFlYzE2YmI4NTI5YjQyOWVlZWI1NTY1MmZmOTJhYzMyMjcyNGVkOBP12V4=: 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.604 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.863 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:32.430 request: 00:19:32.430 { 00:19:32.430 "name": "nvme0", 00:19:32.430 "trtype": "tcp", 00:19:32.430 "traddr": "10.0.0.2", 00:19:32.430 "adrfam": "ipv4", 00:19:32.430 "trsvcid": "4420", 00:19:32.430 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:32.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:32.430 "prchk_reftag": false, 00:19:32.430 "prchk_guard": false, 00:19:32.430 "hdgst": false, 00:19:32.430 "ddgst": false, 00:19:32.430 "dhchap_key": "key1", 00:19:32.430 "allow_unrecognized_csi": false, 00:19:32.430 "method": "bdev_nvme_attach_controller", 00:19:32.430 "req_id": 1 00:19:32.430 } 00:19:32.430 Got JSON-RPC error response 00:19:32.430 response: 00:19:32.430 { 00:19:32.430 "code": -5, 00:19:32.430 "message": "Input/output error" 00:19:32.430 } 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.430 06:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.998 nvme0n1 00:19:32.998 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:32.998 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:32.998 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.257 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.257 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.257 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:33.515 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:33.515 nvme0n1 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.773 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: '' 2s 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: ]] 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzY2MTI0ZDdkM2E5ZDljNmJmOTU2ZWRmY2Y0ZDQ2NDko9oR0: 00:19:34.031 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:34.032 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:34.032 06:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: 2s 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:35.940 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: ]] 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjU1OWUzODczNGI0NmU3OGY1NjVlMGE5YzA4ZDMzNzVlNTQ3NDlkMWM2ZmMwNzljte7iiw==: 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:36.199 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:38.117 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.079 nvme0n1 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.079 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.339 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:39.339 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:39.339 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:39.599 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:39.860 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:40.432 request: 00:19:40.432 { 00:19:40.432 "name": "nvme0", 00:19:40.432 "dhchap_key": "key1", 00:19:40.432 "dhchap_ctrlr_key": "key3", 00:19:40.432 "method": "bdev_nvme_set_keys", 00:19:40.432 "req_id": 1 00:19:40.432 } 00:19:40.432 Got JSON-RPC error response 00:19:40.432 response: 00:19:40.432 { 00:19:40.432 "code": -13, 00:19:40.432 "message": "Permission denied" 00:19:40.432 } 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:40.432 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:41.839 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:41.839 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:41.839 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.839 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:41.839 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:41.839 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.839 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.840 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:41.840 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:41.840 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.409 nvme0n1 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
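This stretch of the trace exercises live re-keying of an established connection: nvmf_subsystem_set_keys swaps the keys the target will accept for the host, and bdev_nvme_set_keys makes the SPDK initiator re-authenticate the existing controller in place (calling it with no key options clears them again). When the host proposes a pair the target was not given, the RPC is rejected with -13 (Permission denied), and because this controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 the failed re-authentication drops it within about a second, which the test detects by polling the controller count. For the kernel initiator the analogous step appeared earlier via the nvme_set_keys helper, which echoes the new DHHC-1 blob under /sys/devices/virtual/nvme-fabrics/ctl/nvme0; the exact sysfs attribute name is not captured by the xtrace (in mainline Linux these are dhchap_secret and dhchap_ctrl_secret, stated here as an assumption). A sketch of the SPDK-side pattern, distilled from the trace:

    # Rotate keys on a live connection: update the target first, then the host
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3   # re-authenticates in place

    # A mismatched pair is rejected with -13 (the test wraps this in NOT, so
    # the failure is the expected outcome); the controller then drops, which
    # the script observes by polling until jq reports zero controllers:
    until [ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq length)" -eq 0 ]; do
        sleep 1
    done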
00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:42.409 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:42.978 request: 00:19:42.978 { 00:19:42.978 "name": "nvme0", 00:19:42.978 "dhchap_key": "key2", 00:19:42.978 "dhchap_ctrlr_key": "key0", 00:19:42.978 "method": "bdev_nvme_set_keys", 00:19:42.978 "req_id": 1 00:19:42.978 } 00:19:42.978 Got JSON-RPC error response 00:19:42.978 response: 00:19:42.978 { 00:19:42.978 "code": -13, 00:19:42.978 "message": "Permission denied" 00:19:42.978 } 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:42.978 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.253 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:43.253 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:44.191 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 306976 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 306976 ']' 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 306976 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:44.192 06:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.192 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306976 00:19:44.451 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:44.451 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:44.451 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306976' 00:19:44.451 killing process with pid 306976 00:19:44.451 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 306976 00:19:44.451 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 306976 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:44.451 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:44.451 rmmod nvme_tcp 00:19:44.451 rmmod nvme_fabrics 00:19:44.451 rmmod nvme_keyring 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 329776 ']' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 329776 ']' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329776' 00:19:44.711 killing process with pid 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 329776 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.711 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.JU8 /tmp/spdk.key-sha256.Brd /tmp/spdk.key-sha384.jjr /tmp/spdk.key-sha512.dcR /tmp/spdk.key-sha512.s7j /tmp/spdk.key-sha384.F4x /tmp/spdk.key-sha256.p9x '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:47.264 00:19:47.264 real 2m37.487s 00:19:47.264 user 5m53.469s 00:19:47.264 sys 0m24.108s 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.264 ************************************ 00:19:47.264 END TEST nvmf_auth_target 00:19:47.264 ************************************ 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.264 ************************************ 00:19:47.264 START TEST nvmf_bdevio_no_huge 00:19:47.264 ************************************ 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:47.264 * Looking for test storage... 
00:19:47.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.264 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:47.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.265 --rc genhtml_branch_coverage=1 00:19:47.265 --rc genhtml_function_coverage=1 00:19:47.265 --rc genhtml_legend=1 00:19:47.265 --rc geninfo_all_blocks=1 00:19:47.265 --rc geninfo_unexecuted_blocks=1 00:19:47.265 00:19:47.265 ' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:47.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.265 --rc genhtml_branch_coverage=1 00:19:47.265 --rc genhtml_function_coverage=1 00:19:47.265 --rc genhtml_legend=1 00:19:47.265 --rc geninfo_all_blocks=1 00:19:47.265 --rc geninfo_unexecuted_blocks=1 00:19:47.265 00:19:47.265 ' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:47.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.265 --rc genhtml_branch_coverage=1 00:19:47.265 --rc genhtml_function_coverage=1 00:19:47.265 --rc genhtml_legend=1 00:19:47.265 --rc geninfo_all_blocks=1 00:19:47.265 --rc geninfo_unexecuted_blocks=1 00:19:47.265 00:19:47.265 ' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:47.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.265 --rc genhtml_branch_coverage=1 00:19:47.265 --rc genhtml_function_coverage=1 00:19:47.265 --rc genhtml_legend=1 00:19:47.265 --rc geninfo_all_blocks=1 00:19:47.265 --rc geninfo_unexecuted_blocks=1 00:19:47.265 00:19:47.265 ' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:47.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.265 06:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.402 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.403 
06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:55.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:55.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:55.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:55.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.403 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:55.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:19:55.404 00:19:55.404 --- 10.0.0.2 ping statistics --- 00:19:55.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.404 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:19:55.404 00:19:55.404 --- 10.0.0.1 ping statistics --- 00:19:55.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.404 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=337151 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 337151 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 337151 ']' 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:55.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 06:18:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:55.404 [2024-12-09 06:18:48.981952] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:19:55.404 [2024-12-09 06:18:48.982022] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:55.404 [2024-12-09 06:18:49.067272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.404 [2024-12-09 06:18:49.122512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.404 [2024-12-09 06:18:49.122554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.404 [2024-12-09 06:18:49.122562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.404 [2024-12-09 06:18:49.122569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.404 [2024-12-09 06:18:49.122575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
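The nvmfappstart call above reduces to launching nvmf_tgt inside the test network namespace without hugepages and then waiting for its RPC socket. A minimal reproduction of the logged sequence (paths abbreviated; the polling loop is an illustrative stand-in for SPDK's waitforlisten helper, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # Poll the RPC socket until the app answers; bail out if the process died.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.1
    done

With --no-huge, the -s 1024 argument sizes the plain (non-hugepage-backed) memory pool in MB, which is exactly the configuration this bdevio_no_huge test exercises.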
00:19:55.404 [2024-12-09 06:18:49.123944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.404 [2024-12-09 06:18:49.124103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:55.404 [2024-12-09 06:18:49.124254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.404 [2024-12-09 06:18:49.124255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 [2024-12-09 06:18:49.844400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 Malloc0 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 [2024-12-09 06:18:49.881849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.404 { 00:19:55.404 "params": { 00:19:55.404 "name": "Nvme$subsystem", 00:19:55.404 "trtype": "$TEST_TRANSPORT", 00:19:55.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.404 "adrfam": "ipv4", 00:19:55.404 "trsvcid": "$NVMF_PORT", 00:19:55.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.404 "hdgst": ${hdgst:-false}, 00:19:55.404 "ddgst": ${ddgst:-false} 00:19:55.404 }, 00:19:55.404 "method": "bdev_nvme_attach_controller" 00:19:55.404 } 00:19:55.404 EOF 00:19:55.404 )") 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:55.404 06:18:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:55.404 "params": { 00:19:55.404 "name": "Nvme1", 00:19:55.404 "trtype": "tcp", 00:19:55.404 "traddr": "10.0.0.2", 00:19:55.404 "adrfam": "ipv4", 00:19:55.404 "trsvcid": "4420", 00:19:55.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.404 "hdgst": false, 00:19:55.404 "ddgst": false 00:19:55.404 }, 00:19:55.404 "method": "bdev_nvme_attach_controller" 00:19:55.404 }' 00:19:55.404 [2024-12-09 06:18:49.938459] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:19:55.404 [2024-12-09 06:18:49.938532] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid337374 ] 00:19:55.665 [2024-12-09 06:18:50.034005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.665 [2024-12-09 06:18:50.092282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.665 [2024-12-09 06:18:50.092437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.665 [2024-12-09 06:18:50.092437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.925 I/O targets: 00:19:55.925 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:55.925 00:19:55.925 00:19:55.925 CUnit - A unit testing framework for C - Version 2.1-3 00:19:55.925 http://cunit.sourceforge.net/ 00:19:55.925 00:19:55.925 00:19:55.925 Suite: bdevio tests on: Nvme1n1 00:19:55.925 Test: blockdev write read block ...passed 00:19:55.925 Test: blockdev write zeroes read block ...passed 00:19:55.925 Test: blockdev write zeroes read no split ...passed 00:19:55.925 Test: blockdev write zeroes read split ...passed 00:19:55.925 Test: blockdev write zeroes read split partial ...passed 00:19:55.925 Test: blockdev reset ...[2024-12-09 06:18:50.391646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:55.925 [2024-12-09 06:18:50.391747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9f70 (9): Bad file descriptor 00:19:55.925 [2024-12-09 06:18:50.407001] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:55.925 passed 00:19:55.925 Test: blockdev write read 8 blocks ...passed 00:19:55.925 Test: blockdev write read size > 128k ...passed 00:19:55.925 Test: blockdev write read invalid size ...passed 00:19:55.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:55.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:55.926 Test: blockdev write read max offset ...passed 00:19:56.186 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:56.186 Test: blockdev writev readv 8 blocks ...passed 00:19:56.186 Test: blockdev writev readv 30 x 1block ...passed 00:19:56.186 Test: blockdev writev readv block ...passed 00:19:56.186 Test: blockdev writev readv size > 128k ...passed 00:19:56.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:56.186 Test: blockdev comparev and writev ...[2024-12-09 06:18:50.671452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.186 [2024-12-09 06:18:50.671495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.186 [2024-12-09 06:18:50.671510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.186 [2024-12-09 06:18:50.671519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.186 [2024-12-09 06:18:50.672028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.186 [2024-12-09 06:18:50.672040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:56.186 [2024-12-09 06:18:50.672053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.186 [2024-12-09 06:18:50.672062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:56.186 [2024-12-09 06:18:50.672550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.187 [2024-12-09 06:18:50.672561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.672580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.187 [2024-12-09 06:18:50.672588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.673095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.187 [2024-12-09 06:18:50.673105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.673118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.187 [2024-12-09 06:18:50.673125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.187 passed 00:19:56.187 Test: blockdev nvme passthru rw ...passed 00:19:56.187 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:18:50.757285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.187 [2024-12-09 06:18:50.757301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.757636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.187 [2024-12-09 06:18:50.757647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.758006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.187 [2024-12-09 06:18:50.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.187 [2024-12-09 06:18:50.758371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.187 [2024-12-09 06:18:50.758380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.187 passed 00:19:56.452 Test: blockdev nvme admin passthru ...passed 00:19:56.452 Test: blockdev copy ...passed 00:19:56.452 00:19:56.452 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.452 suites 1 1 n/a 0 0 00:19:56.452 tests 23 23 23 0 0 00:19:56.452 asserts 152 152 152 0 n/a 00:19:56.452 00:19:56.452 Elapsed time = 1.094 seconds 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.712 rmmod nvme_tcp 00:19:56.712 rmmod nvme_fabrics 00:19:56.712 rmmod nvme_keyring 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 337151 ']' 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 337151 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 337151 ']' 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 337151 00:19:56.712 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337151 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337151' 00:19:56.713 killing process with pid 337151 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 337151 00:19:56.713 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 337151 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.973 06:18:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:59.518 00:19:59.518 real 0m12.127s 00:19:59.518 user 0m13.122s 00:19:59.518 sys 0m6.403s 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.518 ************************************ 00:19:59.518 END TEST nvmf_bdevio_no_huge 00:19:59.518 ************************************ 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.518 ************************************ 00:19:59.518 START TEST nvmf_tls 00:19:59.518 ************************************ 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.518 * Looking for test storage... 00:19:59.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.518 --rc genhtml_branch_coverage=1 00:19:59.518 --rc genhtml_function_coverage=1 00:19:59.518 --rc genhtml_legend=1 00:19:59.518 --rc geninfo_all_blocks=1 00:19:59.518 --rc geninfo_unexecuted_blocks=1 00:19:59.518 00:19:59.518 ' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.518 --rc genhtml_branch_coverage=1 00:19:59.518 --rc genhtml_function_coverage=1 00:19:59.518 --rc genhtml_legend=1 00:19:59.518 --rc geninfo_all_blocks=1 00:19:59.518 --rc geninfo_unexecuted_blocks=1 00:19:59.518 00:19:59.518 ' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.518 --rc genhtml_branch_coverage=1 00:19:59.518 --rc genhtml_function_coverage=1 00:19:59.518 --rc genhtml_legend=1 00:19:59.518 --rc geninfo_all_blocks=1 00:19:59.518 --rc geninfo_unexecuted_blocks=1 00:19:59.518 00:19:59.518 ' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.518 --rc genhtml_branch_coverage=1 00:19:59.518 --rc genhtml_function_coverage=1 00:19:59.518 --rc genhtml_legend=1 00:19:59.518 --rc geninfo_all_blocks=1 00:19:59.518 --rc geninfo_unexecuted_blocks=1 00:19:59.518 00:19:59.518 ' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
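The "[: : integer expression expected" complaints that nvmf/common.sh line 33 emits (once per source: in the bdevio preamble earlier and again in this tls preamble) are bash rejecting an empty string in an arithmetic test, as the trace's '[' '' -eq 1 ']' shows. A minimal reproduction with a defensive form ($FLAG is an illustrative name; the trace does not show which variable expanded empty):

    FLAG=''
    [ "$FLAG" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
    [ "${FLAG:-0}" -eq 1 ]   # defaulting the expansion to 0 keeps the test well-formed

The error does not stop the run here (the test is simply treated as false and the script continues), but it is why the same message repeats wherever common.sh is re-sourced.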
00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.518 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.519 06:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.653 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:07.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:07.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:07.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:07.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.654 06:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:20:07.654 00:20:07.654 --- 10.0.0.2 ping statistics --- 00:20:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.654 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:20:07.654 00:20:07.654 --- 10.0.0.1 ping statistics --- 00:20:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.654 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=341509 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 341509 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 341509 ']' 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.654 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.654 [2024-12-09 06:19:01.268198] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:20:07.654 [2024-12-09 06:19:01.268260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.654 [2024-12-09 06:19:01.349468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.654 [2024-12-09 06:19:01.399933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.654 [2024-12-09 06:19:01.399985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.654 [2024-12-09 06:19:01.399993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.655 [2024-12-09 06:19:01.400000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.655 [2024-12-09 06:19:01.400006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.655 [2024-12-09 06:19:01.400763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:07.655 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:07.914 true 00:20:07.914 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.914 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:08.173 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:08.173 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:08.173 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:08.173 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.173 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:08.434 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:08.434 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:08.434 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.694 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:08.955 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:08.955 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:08.955 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:09.216 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.216 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:09.216 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:09.216 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:09.216 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:09.477 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.477 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:09.477 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.EOMzywe4Jv 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6tEDwyT3Oj 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.EOMzywe4Jv 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6tEDwyT3Oj 00:20:09.737 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:09.996 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:09.996 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.EOMzywe4Jv 00:20:09.996 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EOMzywe4Jv 00:20:09.996 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.258 [2024-12-09 06:19:04.718136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.258 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.518 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.518 [2024-12-09 06:19:05.058970] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.518 [2024-12-09 06:19:05.059160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.518 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.777 malloc0 00:20:10.777 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.036 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EOMzywe4Jv 00:20:11.036 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.296 06:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EOMzywe4Jv 00:20:21.285 Initializing NVMe Controllers 00:20:21.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.285 Initialization complete. Launching workers. 00:20:21.285 ======================================================== 00:20:21.285 Latency(us) 00:20:21.285 Device Information : IOPS MiB/s Average min max 00:20:21.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18348.28 71.67 3488.29 1068.46 4105.51 00:20:21.285 ======================================================== 00:20:21.285 Total : 18348.28 71.67 3488.29 1068.46 4105.51 00:20:21.285 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EOMzywe4Jv 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EOMzywe4Jv 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=343930 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 343930 /var/tmp/bdevperf.sock 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 343930 ']' 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
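target/tls.sh@119 through @129 above derive the two interchange-format PSKs with format_interchange_psk, write them to mktemp files (/tmp/tmp.EOMzywe4Jv and /tmp/tmp.6tEDwyT3Oj), and chmod them 0600. The envelope is the NVMeTLSkey-1 form: version prefix, a two-digit hash identifier (01 here), then the base64 of the configured key string with a CRC32 appended — note the key is encoded as its literal ASCII string, which is why the logged value starts MDAxMTIy... (base64 of "0011..."). A sketch of the same derivation, driven the way the script itself drives python; the little-endian CRC byte order is an assumption about SPDK's format_key helper, not spec guidance:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
# The key string itself (not its hex-decoded bytes) is what gets encoded,
# matching the NVMeTLSkey-1:01:MDAx... value in the trace above.
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: little-endian, as in SPDK's format_key
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF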
00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.546 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.546 [2024-12-09 06:19:15.935527] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:21.546 [2024-12-09 06:19:15.935585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343930 ] 00:20:21.546 [2024-12-09 06:19:16.004150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.546 [2024-12-09 06:19:16.037721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.486 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.486 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:22.486 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EOMzywe4Jv 00:20:22.486 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.486 [2024-12-09 06:19:17.052713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.746 TLSTESTn1 00:20:22.746 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:22.746 Running I/O for 10 seconds... 
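Before the ten-second sample stream below, it is worth condensing the target-side provisioning that the trace above already issued, RPC by RPC (paths abbreviated; rpc.py lives under the workspace's spdk/scripts, and the key file name is the per-run mktemp value):

rpc=scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13       # pin TLS 1.3 for the ssl sock impl
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                        # -k marks the listener TLS-only
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.EOMzywe4Jv       # interchange-format PSK file, chmod 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side then repeats the key registration against bdevperf's private socket (-r /var/tmp/bdevperf.sock) and attaches with bdev_nvme_attach_controller --psk key0, which is the happy path exercised by the run below.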
00:20:25.072 2039.00 IOPS, 7.96 MiB/s [2024-12-09T05:19:20.601Z] 1845.00 IOPS, 7.21 MiB/s [2024-12-09T05:19:21.541Z] 1657.33 IOPS, 6.47 MiB/s [2024-12-09T05:19:22.477Z] 1583.25 IOPS, 6.18 MiB/s [2024-12-09T05:19:23.416Z] 1731.00 IOPS, 6.76 MiB/s [2024-12-09T05:19:24.352Z] 1910.67 IOPS, 7.46 MiB/s [2024-12-09T05:19:25.291Z] 1846.86 IOPS, 7.21 MiB/s [2024-12-09T05:19:26.681Z] 1775.38 IOPS, 6.94 MiB/s [2024-12-09T05:19:27.621Z] 1767.11 IOPS, 6.90 MiB/s [2024-12-09T05:19:27.621Z] 1752.30 IOPS, 6.84 MiB/s 00:20:33.034 Latency(us) 00:20:33.034 [2024-12-09T05:19:27.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.034 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.034 Verification LBA range: start 0x0 length 0x2000 00:20:33.034 TLSTESTn1 : 10.06 1755.15 6.86 0.00 0.00 72788.54 5545.35 189550.28 00:20:33.034 [2024-12-09T05:19:27.621Z] =================================================================================================================== 00:20:33.034 [2024-12-09T05:19:27.621Z] Total : 1755.15 6.86 0.00 0.00 72788.54 5545.35 189550.28 00:20:33.034 { 00:20:33.034 "results": [ 00:20:33.034 { 00:20:33.034 "job": "TLSTESTn1", 00:20:33.034 "core_mask": "0x4", 00:20:33.034 "workload": "verify", 00:20:33.034 "status": "finished", 00:20:33.034 "verify_range": { 00:20:33.034 "start": 0, 00:20:33.034 "length": 8192 00:20:33.034 }, 00:20:33.034 "queue_depth": 128, 00:20:33.034 "io_size": 4096, 00:20:33.034 "runtime": 10.056695, 00:20:33.034 "iops": 1755.149181714271, 00:20:33.034 "mibps": 6.856051491071371, 00:20:33.034 "io_failed": 0, 00:20:33.034 "io_timeout": 0, 00:20:33.034 "avg_latency_us": 72788.53918514097, 00:20:33.034 "min_latency_us": 5545.3538461538465, 00:20:33.034 "max_latency_us": 189550.27692307692 00:20:33.034 } 00:20:33.034 ], 00:20:33.034 "core_count": 1 00:20:33.034 } 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 343930 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 343930 ']' 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 343930 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343930 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343930' 00:20:33.034 killing process with pid 343930 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 343930 00:20:33.034 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.034 00:20:33.034 Latency(us) 00:20:33.034 [2024-12-09T05:19:27.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.034 [2024-12-09T05:19:27.621Z] 
=================================================================================================================== 00:20:33.034 [2024-12-09T05:19:27.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 343930 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tEDwyT3Oj 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tEDwyT3Oj 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tEDwyT3Oj 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6tEDwyT3Oj 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=346014 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 346014 /var/tmp/bdevperf.sock 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346014 ']' 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
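target/tls.sh@147 flips the test around: run_bdevperf is handed /tmp/tmp.6tEDwyT3Oj, a key the target was never told about, and the call is wrapped in NOT so the case passes only if the attach fails. A simplified sketch of that idiom; SPDK's real NOT in common/autotest_common.sh additionally captures es and validates the argument, as the valid_exec_arg / "(( es > 128 ))" trace lines show:

# Succeed exactly when the wrapped command fails; used for expected-failure cases.
NOT() {
    if "$@"; then
        return 1    # the command was supposed to fail but did not
    fi
    return 0
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tEDwyT3Oj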
00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.034 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.034 [2024-12-09 06:19:27.547844] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:33.034 [2024-12-09 06:19:27.547896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346014 ] 00:20:33.034 [2024-12-09 06:19:27.606145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.294 [2024-12-09 06:19:27.634614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.294 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.294 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:33.294 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6tEDwyT3Oj 00:20:33.554 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:33.554 [2024-12-09 06:19:28.044021] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.554 [2024-12-09 06:19:28.053383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:33.554 [2024-12-09 06:19:28.054126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6420 (107): Transport endpoint is not connected 00:20:33.554 [2024-12-09 06:19:28.055122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6420 (9): Bad file descriptor 00:20:33.554 [2024-12-09 06:19:28.056123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:33.554 [2024-12-09 06:19:28.056133] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:33.554 [2024-12-09 06:19:28.056139] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:33.554 [2024-12-09 06:19:28.056146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:33.554 request: 00:20:33.554 { 00:20:33.554 "name": "TLSTEST", 00:20:33.554 "trtype": "tcp", 00:20:33.554 "traddr": "10.0.0.2", 00:20:33.554 "adrfam": "ipv4", 00:20:33.554 "trsvcid": "4420", 00:20:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.554 "prchk_reftag": false, 00:20:33.554 "prchk_guard": false, 00:20:33.554 "hdgst": false, 00:20:33.554 "ddgst": false, 00:20:33.554 "psk": "key0", 00:20:33.554 "allow_unrecognized_csi": false, 00:20:33.554 "method": "bdev_nvme_attach_controller", 00:20:33.554 "req_id": 1 00:20:33.554 } 00:20:33.554 Got JSON-RPC error response 00:20:33.554 response: 00:20:33.554 { 00:20:33.554 "code": -5, 00:20:33.554 "message": "Input/output error" 00:20:33.554 } 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 346014 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346014 ']' 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346014 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346014 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346014' 00:20:33.554 killing process with pid 346014 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346014 00:20:33.554 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.554 00:20:33.554 Latency(us) 00:20:33.554 [2024-12-09T05:19:28.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.554 [2024-12-09T05:19:28.141Z] =================================================================================================================== 00:20:33.554 [2024-12-09T05:19:28.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.554 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346014 00:20:33.814 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:33.814 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EOMzywe4Jv 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.EOMzywe4Jv 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EOMzywe4Jv 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EOMzywe4Jv 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=346026 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 346026 /var/tmp/bdevperf.sock 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346026 ']' 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.815 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.815 [2024-12-09 06:19:28.281683] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:20:33.815 [2024-12-09 06:19:28.281736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346026 ] 00:20:33.815 [2024-12-09 06:19:28.341916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.815 [2024-12-09 06:19:28.370374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.075 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.075 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.075 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EOMzywe4Jv 00:20:34.075 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:34.336 [2024-12-09 06:19:28.775733] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.336 [2024-12-09 06:19:28.785074] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.336 [2024-12-09 06:19:28.785093] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.336 [2024-12-09 06:19:28.785112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.336 [2024-12-09 06:19:28.785937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1420 (107): Transport endpoint is not connected 00:20:34.336 [2024-12-09 06:19:28.786932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1420 (9): Bad file descriptor 00:20:34.336 [2024-12-09 06:19:28.787933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:34.336 [2024-12-09 06:19:28.787941] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:34.336 [2024-12-09 06:19:28.787951] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:34.336 [2024-12-09 06:19:28.787960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:34.336 request: 00:20:34.336 { 00:20:34.336 "name": "TLSTEST", 00:20:34.336 "trtype": "tcp", 00:20:34.336 "traddr": "10.0.0.2", 00:20:34.336 "adrfam": "ipv4", 00:20:34.336 "trsvcid": "4420", 00:20:34.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.336 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.336 "prchk_reftag": false, 00:20:34.336 "prchk_guard": false, 00:20:34.336 "hdgst": false, 00:20:34.336 "ddgst": false, 00:20:34.336 "psk": "key0", 00:20:34.336 "allow_unrecognized_csi": false, 00:20:34.336 "method": "bdev_nvme_attach_controller", 00:20:34.336 "req_id": 1 00:20:34.336 } 00:20:34.336 Got JSON-RPC error response 00:20:34.336 response: 00:20:34.336 { 00:20:34.336 "code": -5, 00:20:34.336 "message": "Input/output error" 00:20:34.336 } 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 346026 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346026 ']' 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346026 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.336 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346026 00:20:34.337 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.337 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.337 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346026' 00:20:34.337 killing process with pid 346026 00:20:34.337 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346026 00:20:34.337 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.337 00:20:34.337 Latency(us) 00:20:34.337 [2024-12-09T05:19:28.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.337 [2024-12-09T05:19:28.924Z] =================================================================================================================== 00:20:34.337 [2024-12-09T05:19:28.924Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.337 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346026 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EOMzywe4Jv 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.EOMzywe4Jv 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EOMzywe4Jv 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EOMzywe4Jv 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=346199 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 346199 /var/tmp/bdevperf.sock 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346199 ']' 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.597 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.597 [2024-12-09 06:19:29.016045] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
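Each of these expected-failure cases is wrapped in the NOT helper from common/autotest_common.sh, whose bookkeeping shows up in the trace as es=1, (( es > 128 )), and (( !es == 0 )). A rough sketch of the idea, simplified from what the trace shows (the real helper also first checks via valid_exec_arg that its argument is actually invocable):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # a signal death is still a real failure
    (( es != 0 ))                    # succeed only if the wrapped command failed
}
# usage, as in this test:
#   NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EOMzywe4Jv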
00:20:34.597 [2024-12-09 06:19:29.016097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346199 ] 00:20:34.597 [2024-12-09 06:19:29.075181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.597 [2024-12-09 06:19:29.103973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.597 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.597 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.597 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EOMzywe4Jv 00:20:34.857 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.116 [2024-12-09 06:19:29.509804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.116 [2024-12-09 06:19:29.516853] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.116 [2024-12-09 06:19:29.516871] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.116 [2024-12-09 06:19:29.516891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.116 [2024-12-09 06:19:29.517102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1420 (107): Transport endpoint is not connected 00:20:35.116 [2024-12-09 06:19:29.518098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1420 (9): Bad file descriptor 00:20:35.116 [2024-12-09 06:19:29.519100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:35.116 [2024-12-09 06:19:29.519110] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.116 [2024-12-09 06:19:29.519116] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:35.116 [2024-12-09 06:19:29.519126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:35.116 request: 00:20:35.116 { 00:20:35.116 "name": "TLSTEST", 00:20:35.116 "trtype": "tcp", 00:20:35.116 "traddr": "10.0.0.2", 00:20:35.116 "adrfam": "ipv4", 00:20:35.116 "trsvcid": "4420", 00:20:35.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.117 "prchk_reftag": false, 00:20:35.117 "prchk_guard": false, 00:20:35.117 "hdgst": false, 00:20:35.117 "ddgst": false, 00:20:35.117 "psk": "key0", 00:20:35.117 "allow_unrecognized_csi": false, 00:20:35.117 "method": "bdev_nvme_attach_controller", 00:20:35.117 "req_id": 1 00:20:35.117 } 00:20:35.117 Got JSON-RPC error response 00:20:35.117 response: 00:20:35.117 { 00:20:35.117 "code": -5, 00:20:35.117 "message": "Input/output error" 00:20:35.117 } 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 346199 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346199 ']' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346199 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346199 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346199' 00:20:35.117 killing process with pid 346199 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346199 00:20:35.117 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.117 00:20:35.117 Latency(us) 00:20:35.117 [2024-12-09T05:19:29.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.117 [2024-12-09T05:19:29.704Z] =================================================================================================================== 00:20:35.117 [2024-12-09T05:19:29.704Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346199 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.117 06:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=346351 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 346351 /var/tmp/bdevperf.sock 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346351 ']' 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.117 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.377 [2024-12-09 06:19:29.743634] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:20:35.377 [2024-12-09 06:19:29.743687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346351 ] 00:20:35.377 [2024-12-09 06:19:29.802245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.377 [2024-12-09 06:19:29.831045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.377 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.377 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.377 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:35.636 [2024-12-09 06:19:30.056086] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:35.636 [2024-12-09 06:19:30.056115] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:35.636 request: 00:20:35.636 { 00:20:35.636 "name": "key0", 00:20:35.636 "path": "", 00:20:35.636 "method": "keyring_file_add_key", 00:20:35.636 "req_id": 1 00:20:35.636 } 00:20:35.636 Got JSON-RPC error response 00:20:35.636 response: 00:20:35.636 { 00:20:35.636 "code": -1, 00:20:35.636 "message": "Operation not permitted" 00:20:35.636 } 00:20:35.636 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.895 [2024-12-09 06:19:30.232606] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.895 [2024-12-09 06:19:30.232631] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:35.895 request: 00:20:35.895 { 00:20:35.895 "name": "TLSTEST", 00:20:35.895 "trtype": "tcp", 00:20:35.895 "traddr": "10.0.0.2", 00:20:35.895 "adrfam": "ipv4", 00:20:35.895 "trsvcid": "4420", 00:20:35.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.895 "prchk_reftag": false, 00:20:35.895 "prchk_guard": false, 00:20:35.895 "hdgst": false, 00:20:35.895 "ddgst": false, 00:20:35.895 "psk": "key0", 00:20:35.895 "allow_unrecognized_csi": false, 00:20:35.895 "method": "bdev_nvme_attach_controller", 00:20:35.895 "req_id": 1 00:20:35.896 } 00:20:35.896 Got JSON-RPC error response 00:20:35.896 response: 00:20:35.896 { 00:20:35.896 "code": -126, 00:20:35.896 "message": "Required key not available" 00:20:35.896 } 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 346351 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346351 ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346351 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346351 
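Two different error codes are asserted in this case: keyring_file_add_key rejects the empty path up front ("Non-absolute paths are not allowed", JSON-RPC -1), and because no key0 ever reached the keyring, the later attach fails separately with -126 "Required key not available". Condensed to the two RPCs involved (rpc.py abbreviates the full path):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' \
  || echo "rejected at registration: key path must be absolute (-1)"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk key0 \
  || echo "rejected at attach: key0 was never added (-126)"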
00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346351' 00:20:35.896 killing process with pid 346351 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346351 00:20:35.896 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.896 00:20:35.896 Latency(us) 00:20:35.896 [2024-12-09T05:19:30.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.896 [2024-12-09T05:19:30.483Z] =================================================================================================================== 00:20:35.896 [2024-12-09T05:19:30.483Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346351 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 341509 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 341509 ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 341509 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341509 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341509' 00:20:35.896 killing process with pid 341509 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 341509 00:20:35.896 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 341509 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JG4sKhEbG2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JG4sKhEbG2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=346440 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 346440 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346440 ']' 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.157 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:36.157 [2024-12-09 06:19:30.657183] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:36.157 [2024-12-09 06:19:30.657236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.157 [2024-12-09 06:19:30.721438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.417 [2024-12-09 06:19:30.751407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.417 [2024-12-09 06:19:30.751437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
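format_interchange_psk above turns the raw hex string into the NVMe TLS PSK interchange form NVMeTLSkey-1:02:<base64>:, where the 02 field records the digest argument (here 2, the SHA-384 variant of the format) and, judging from the output, the base64 payload is the ASCII hex string itself with a 4-byte CRC32 appended. A sketch that should reproduce the key_long value above under that reading (the little-endian byte order of the CRC trailer is inferred from the trailing wWXNJw==):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
k = sys.argv[1].encode()                   # the ASCII hex string is the payload
crc = zlib.crc32(k).to_bytes(4, "little")  # 4-byte CRC32 trailer (inferred)
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")
EOF

The key is then written to a mktemp file and chmod'ed to 0600, which matters later in this log: the keyring refuses key files that are accessible beyond their owner.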
00:20:36.417 [2024-12-09 06:19:30.751442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.417 [2024-12-09 06:19:30.751447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.417 [2024-12-09 06:19:30.751456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.417 [2024-12-09 06:19:30.751898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JG4sKhEbG2 00:20:36.417 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.678 [2024-12-09 06:19:31.022001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.678 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:36.678 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:36.938 [2024-12-09 06:19:31.354824] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.938 [2024-12-09 06:19:31.355005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.938 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.938 malloc0 00:20:37.197 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:37.197 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:37.455 06:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JG4sKhEbG2 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JG4sKhEbG2 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=346702 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 346702 /var/tmp/bdevperf.sock 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346702 ']' 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.455 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.456 [2024-12-09 06:19:32.031676] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
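bdevperf is started with -z, so it comes up idle on its private RPC socket instead of running a job immediately; the test then injects the key, attaches the TLS controller, and only afterwards starts I/O via bdevperf.py, as the trace a few lines below shows. The flags on the launch line: -q 128 queue depth, -o 4096-byte I/Os, -w verify workload, -t 10 seconds. Condensed, with the long /var/jenkins/... paths abbreviated:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # drives the 10 s verify run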
00:20:37.456 [2024-12-09 06:19:32.031728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346702 ] 00:20:37.714 [2024-12-09 06:19:32.089501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.714 [2024-12-09 06:19:32.118970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.714 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.714 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:37.714 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:37.973 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.973 [2024-12-09 06:19:32.516581] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.232 TLSTESTn1 00:20:38.232 06:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:38.232 Running I/O for 10 seconds... 00:20:40.552 1831.00 IOPS, 7.15 MiB/s [2024-12-09T05:19:36.085Z] 2314.50 IOPS, 9.04 MiB/s [2024-12-09T05:19:37.024Z] 2028.67 IOPS, 7.92 MiB/s [2024-12-09T05:19:37.964Z] 1837.75 IOPS, 7.18 MiB/s [2024-12-09T05:19:38.905Z] 1768.20 IOPS, 6.91 MiB/s [2024-12-09T05:19:39.846Z] 1789.33 IOPS, 6.99 MiB/s [2024-12-09T05:19:40.785Z] 1845.71 IOPS, 7.21 MiB/s [2024-12-09T05:19:42.166Z] 1831.38 IOPS, 7.15 MiB/s [2024-12-09T05:19:43.107Z] 1800.22 IOPS, 7.03 MiB/s [2024-12-09T05:19:43.107Z] 1863.10 IOPS, 7.28 MiB/s 00:20:48.520 Latency(us) 00:20:48.520 [2024-12-09T05:19:43.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.520 Verification LBA range: start 0x0 length 0x2000 00:20:48.520 TLSTESTn1 : 10.07 1863.10 7.28 0.00 0.00 68533.66 5217.67 129862.10 00:20:48.520 [2024-12-09T05:19:43.107Z] =================================================================================================================== 00:20:48.520 [2024-12-09T05:19:43.107Z] Total : 1863.10 7.28 0.00 0.00 68533.66 5217.67 129862.10 00:20:48.520 { 00:20:48.520 "results": [ 00:20:48.520 { 00:20:48.520 "job": "TLSTESTn1", 00:20:48.520 "core_mask": "0x4", 00:20:48.520 "workload": "verify", 00:20:48.520 "status": "finished", 00:20:48.520 "verify_range": { 00:20:48.520 "start": 0, 00:20:48.520 "length": 8192 00:20:48.520 }, 00:20:48.520 "queue_depth": 128, 00:20:48.520 "io_size": 4096, 00:20:48.520 "runtime": 10.068677, 00:20:48.520 "iops": 1863.1047554708528, 00:20:48.520 "mibps": 7.277752951058019, 00:20:48.520 "io_failed": 0, 00:20:48.520 "io_timeout": 0, 00:20:48.520 "avg_latency_us": 68533.66391615102, 00:20:48.520 "min_latency_us": 5217.673846153846, 00:20:48.520 "max_latency_us": 129862.10461538461 00:20:48.520 } 00:20:48.520 ], 00:20:48.520 "core_count": 1 
00:20:48.520 } 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 346702 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346702 ']' 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346702 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346702 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346702' 00:20:48.520 killing process with pid 346702 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346702 00:20:48.520 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.520 00:20:48.520 Latency(us) 00:20:48.520 [2024-12-09T05:19:43.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.520 [2024-12-09T05:19:43.107Z] =================================================================================================================== 00:20:48.520 [2024-12-09T05:19:43.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346702 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JG4sKhEbG2 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JG4sKhEbG2 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JG4sKhEbG2 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JG4sKhEbG2 00:20:48.520 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.521 06:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JG4sKhEbG2 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=348533 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 348533 /var/tmp/bdevperf.sock 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 348533 ']' 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.521 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.521 [2024-12-09 06:19:43.053237] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:20:48.521 [2024-12-09 06:19:43.053293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348533 ] 00:20:48.781 [2024-12-09 06:19:43.115532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.781 [2024-12-09 06:19:43.144416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.781 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.781 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.781 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:49.042 [2024-12-09 06:19:43.377575] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JG4sKhEbG2': 0100666 00:20:49.042 [2024-12-09 06:19:43.377594] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:49.042 request: 00:20:49.042 { 00:20:49.042 "name": "key0", 00:20:49.042 "path": "/tmp/tmp.JG4sKhEbG2", 00:20:49.042 "method": "keyring_file_add_key", 00:20:49.042 "req_id": 1 00:20:49.042 } 00:20:49.042 Got JSON-RPC error response 00:20:49.042 response: 00:20:49.042 { 00:20:49.042 "code": -1, 00:20:49.042 "message": "Operation not permitted" 00:20:49.042 } 00:20:49.042 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.042 [2024-12-09 06:19:43.534032] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.042 [2024-12-09 06:19:43.534056] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:49.042 request: 00:20:49.042 { 00:20:49.042 "name": "TLSTEST", 00:20:49.042 "trtype": "tcp", 00:20:49.042 "traddr": "10.0.0.2", 00:20:49.042 "adrfam": "ipv4", 00:20:49.042 "trsvcid": "4420", 00:20:49.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.042 "prchk_reftag": false, 00:20:49.042 "prchk_guard": false, 00:20:49.042 "hdgst": false, 00:20:49.042 "ddgst": false, 00:20:49.042 "psk": "key0", 00:20:49.042 "allow_unrecognized_csi": false, 00:20:49.042 "method": "bdev_nvme_attach_controller", 00:20:49.042 "req_id": 1 00:20:49.042 } 00:20:49.042 Got JSON-RPC error response 00:20:49.042 response: 00:20:49.042 { 00:20:49.042 "code": -126, 00:20:49.042 "message": "Required key not available" 00:20:49.042 } 00:20:49.042 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 348533 00:20:49.042 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 348533 ']' 00:20:49.042 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 348533 00:20:49.042 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348533 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348533' 00:20:49.043 killing process with pid 348533 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 348533 00:20:49.043 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.043 00:20:49.043 Latency(us) 00:20:49.043 [2024-12-09T05:19:43.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.043 [2024-12-09T05:19:43.630Z] =================================================================================================================== 00:20:49.043 [2024-12-09T05:19:43.630Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.043 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 348533 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346440 ']' 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346440' 00:20:49.303 killing process with pid 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346440 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=348785 
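This pass is the mirror image of the 0600 case: tls.sh@171 flips the key file to 0666, the keyring refuses it at registration ("Invalid permissions for key file ... 0100666", -1), and the attach again dies with -126. Reduced to the permission dance (tls.sh@182 restores 0600 a little further down, after the same bad mode has also been shown to break target-side setup):

chmod 0666 /tmp/tmp.JG4sKhEbG2
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 \
  || echo "rejected: key files must not be group/world accessible"
chmod 0600 /tmp/tmp.JG4sKhEbG2   # back to a mode the keyring accepts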
00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 348785 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 348785 ']' 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.303 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.563 [2024-12-09 06:19:43.930943] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:49.563 [2024-12-09 06:19:43.930998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.563 [2024-12-09 06:19:43.995070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.563 [2024-12-09 06:19:44.025516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.563 [2024-12-09 06:19:44.025547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.563 [2024-12-09 06:19:44.025552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.563 [2024-12-09 06:19:44.025557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.563 [2024-12-09 06:19:44.025562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
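nvmfappstart, whose trace this is, boots a fresh nvmf_tgt inside the job's network namespace and blocks until the target's RPC socket answers; every target (re)start in this file repeats the pattern. Reduced to its core, with paths shortened and the namespace name as captured above:

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                                    # -e 0xFFFF: tracepoint mask, -m 0x2: run on core 1
waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # poll until the UNIX RPC socket is up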
00:20:49.563 [2024-12-09 06:19:44.026028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.563 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.563 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:49.563 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.563 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.563 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JG4sKhEbG2 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.823 [2024-12-09 06:19:44.312721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.823 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.083 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.083 [2024-12-09 06:19:44.657567] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.083 [2024-12-09 06:19:44.657752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.343 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.343 malloc0 00:20:50.343 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.605 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:50.605 [2024-12-09 
06:19:45.156488] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JG4sKhEbG2': 0100666 00:20:50.605 [2024-12-09 06:19:45.156508] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:50.605 request: 00:20:50.605 { 00:20:50.605 "name": "key0", 00:20:50.605 "path": "/tmp/tmp.JG4sKhEbG2", 00:20:50.605 "method": "keyring_file_add_key", 00:20:50.605 "req_id": 1 00:20:50.605 } 00:20:50.605 Got JSON-RPC error response 00:20:50.605 response: 00:20:50.605 { 00:20:50.605 "code": -1, 00:20:50.605 "message": "Operation not permitted" 00:20:50.605 } 00:20:50.605 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.867 [2024-12-09 06:19:45.328932] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:50.867 [2024-12-09 06:19:45.328959] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:50.867 request: 00:20:50.867 { 00:20:50.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.867 "host": "nqn.2016-06.io.spdk:host1", 00:20:50.867 "psk": "key0", 00:20:50.867 "method": "nvmf_subsystem_add_host", 00:20:50.867 "req_id": 1 00:20:50.867 } 00:20:50.867 Got JSON-RPC error response 00:20:50.867 response: 00:20:50.867 { 00:20:50.867 "code": -32603, 00:20:50.867 "message": "Internal error" 00:20:50.867 } 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 348785 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 348785 ']' 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 348785 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348785 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348785' 00:20:50.867 killing process with pid 348785 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 348785 00:20:50.867 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 348785 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JG4sKhEbG2 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=349030 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 349030 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349030 ']' 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.127 [2024-12-09 06:19:45.570707] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:51.127 [2024-12-09 06:19:45.570761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.127 [2024-12-09 06:19:45.635463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.127 [2024-12-09 06:19:45.665999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.127 [2024-12-09 06:19:45.666032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.127 [2024-12-09 06:19:45.666038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.127 [2024-12-09 06:19:45.666043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.127 [2024-12-09 06:19:45.666047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
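With the key file back at 0600, the log below finally runs setup_nvmf_tgt end to end: the same tls.sh@52-59 sequence that failed twice above now succeeds, and the bdevperf attach that follows it completes. Stripped of xtrace noise, the target-side sequence is:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0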
00:20:51.127 [2024-12-09 06:19:45.666516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JG4sKhEbG2 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.386 [2024-12-09 06:19:45.945197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.386 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:51.646 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:51.905 [2024-12-09 06:19:46.274006] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.905 [2024-12-09 06:19:46.274187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.905 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:51.905 malloc0 00:20:51.905 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:52.165 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=349234 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 349234 /var/tmp/bdevperf.sock 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349234 ']' 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.424 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.684 [2024-12-09 06:19:47.032144] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:52.684 [2024-12-09 06:19:47.032213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349234 ] 00:20:52.684 [2024-12-09 06:19:47.093477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.684 [2024-12-09 06:19:47.122810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.684 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.684 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.684 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:20:52.945 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.945 [2024-12-09 06:19:47.516853] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.205 TLSTESTn1 00:20:53.205 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:53.466 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:53.466 "subsystems": [ 00:20:53.466 { 00:20:53.466 "subsystem": "keyring", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "keyring_file_add_key", 00:20:53.466 "params": { 00:20:53.466 "name": "key0", 00:20:53.466 "path": "/tmp/tmp.JG4sKhEbG2" 00:20:53.466 } 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "iobuf", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "iobuf_set_options", 00:20:53.466 "params": { 00:20:53.466 "small_pool_count": 8192, 00:20:53.466 "large_pool_count": 1024, 00:20:53.466 "small_bufsize": 8192, 00:20:53.466 "large_bufsize": 135168, 00:20:53.466 "enable_numa": false 00:20:53.466 } 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "sock", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "sock_set_default_impl", 00:20:53.466 "params": { 00:20:53.466 "impl_name": "posix" 00:20:53.466 } 
00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "sock_impl_set_options", 00:20:53.466 "params": { 00:20:53.466 "impl_name": "ssl", 00:20:53.466 "recv_buf_size": 4096, 00:20:53.466 "send_buf_size": 4096, 00:20:53.466 "enable_recv_pipe": true, 00:20:53.466 "enable_quickack": false, 00:20:53.466 "enable_placement_id": 0, 00:20:53.466 "enable_zerocopy_send_server": true, 00:20:53.466 "enable_zerocopy_send_client": false, 00:20:53.466 "zerocopy_threshold": 0, 00:20:53.466 "tls_version": 0, 00:20:53.466 "enable_ktls": false 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "sock_impl_set_options", 00:20:53.466 "params": { 00:20:53.466 "impl_name": "posix", 00:20:53.466 "recv_buf_size": 2097152, 00:20:53.466 "send_buf_size": 2097152, 00:20:53.466 "enable_recv_pipe": true, 00:20:53.466 "enable_quickack": false, 00:20:53.466 "enable_placement_id": 0, 00:20:53.466 "enable_zerocopy_send_server": true, 00:20:53.466 "enable_zerocopy_send_client": false, 00:20:53.466 "zerocopy_threshold": 0, 00:20:53.466 "tls_version": 0, 00:20:53.466 "enable_ktls": false 00:20:53.466 } 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "vmd", 00:20:53.466 "config": [] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "accel", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "accel_set_options", 00:20:53.466 "params": { 00:20:53.466 "small_cache_size": 128, 00:20:53.466 "large_cache_size": 16, 00:20:53.466 "task_count": 2048, 00:20:53.466 "sequence_count": 2048, 00:20:53.466 "buf_count": 2048 00:20:53.466 } 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "bdev", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "bdev_set_options", 00:20:53.466 "params": { 00:20:53.466 "bdev_io_pool_size": 65535, 00:20:53.466 "bdev_io_cache_size": 256, 00:20:53.466 "bdev_auto_examine": true, 00:20:53.466 "iobuf_small_cache_size": 128, 00:20:53.466 "iobuf_large_cache_size": 16 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_raid_set_options", 00:20:53.466 "params": { 00:20:53.466 "process_window_size_kb": 1024, 00:20:53.466 "process_max_bandwidth_mb_sec": 0 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_iscsi_set_options", 00:20:53.466 "params": { 00:20:53.466 "timeout_sec": 30 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_nvme_set_options", 00:20:53.466 "params": { 00:20:53.466 "action_on_timeout": "none", 00:20:53.466 "timeout_us": 0, 00:20:53.466 "timeout_admin_us": 0, 00:20:53.466 "keep_alive_timeout_ms": 10000, 00:20:53.466 "arbitration_burst": 0, 00:20:53.466 "low_priority_weight": 0, 00:20:53.466 "medium_priority_weight": 0, 00:20:53.466 "high_priority_weight": 0, 00:20:53.466 "nvme_adminq_poll_period_us": 10000, 00:20:53.466 "nvme_ioq_poll_period_us": 0, 00:20:53.466 "io_queue_requests": 0, 00:20:53.466 "delay_cmd_submit": true, 00:20:53.466 "transport_retry_count": 4, 00:20:53.466 "bdev_retry_count": 3, 00:20:53.466 "transport_ack_timeout": 0, 00:20:53.466 "ctrlr_loss_timeout_sec": 0, 00:20:53.466 "reconnect_delay_sec": 0, 00:20:53.466 "fast_io_fail_timeout_sec": 0, 00:20:53.466 "disable_auto_failback": false, 00:20:53.466 "generate_uuids": false, 00:20:53.466 "transport_tos": 0, 00:20:53.466 "nvme_error_stat": false, 00:20:53.466 "rdma_srq_size": 0, 00:20:53.466 "io_path_stat": false, 00:20:53.466 "allow_accel_sequence": false, 00:20:53.466 "rdma_max_cq_size": 0, 00:20:53.466 "rdma_cm_event_timeout_ms": 0, 
00:20:53.466 "dhchap_digests": [ 00:20:53.466 "sha256", 00:20:53.466 "sha384", 00:20:53.466 "sha512" 00:20:53.466 ], 00:20:53.466 "dhchap_dhgroups": [ 00:20:53.466 "null", 00:20:53.466 "ffdhe2048", 00:20:53.466 "ffdhe3072", 00:20:53.466 "ffdhe4096", 00:20:53.466 "ffdhe6144", 00:20:53.466 "ffdhe8192" 00:20:53.466 ] 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_nvme_set_hotplug", 00:20:53.466 "params": { 00:20:53.466 "period_us": 100000, 00:20:53.466 "enable": false 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_malloc_create", 00:20:53.466 "params": { 00:20:53.466 "name": "malloc0", 00:20:53.466 "num_blocks": 8192, 00:20:53.466 "block_size": 4096, 00:20:53.466 "physical_block_size": 4096, 00:20:53.466 "uuid": "89bc4668-9ab2-4bd5-8814-8bb321cf35ed", 00:20:53.466 "optimal_io_boundary": 0, 00:20:53.466 "md_size": 0, 00:20:53.466 "dif_type": 0, 00:20:53.466 "dif_is_head_of_md": false, 00:20:53.466 "dif_pi_format": 0 00:20:53.466 } 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "method": "bdev_wait_for_examine" 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "nbd", 00:20:53.466 "config": [] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "scheduler", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "framework_set_scheduler", 00:20:53.466 "params": { 00:20:53.466 "name": "static" 00:20:53.466 } 00:20:53.466 } 00:20:53.466 ] 00:20:53.466 }, 00:20:53.466 { 00:20:53.466 "subsystem": "nvmf", 00:20:53.466 "config": [ 00:20:53.466 { 00:20:53.466 "method": "nvmf_set_config", 00:20:53.466 "params": { 00:20:53.466 "discovery_filter": "match_any", 00:20:53.466 "admin_cmd_passthru": { 00:20:53.466 "identify_ctrlr": false 00:20:53.466 }, 00:20:53.466 "dhchap_digests": [ 00:20:53.466 "sha256", 00:20:53.466 "sha384", 00:20:53.466 "sha512" 00:20:53.466 ], 00:20:53.467 "dhchap_dhgroups": [ 00:20:53.467 "null", 00:20:53.467 "ffdhe2048", 00:20:53.467 "ffdhe3072", 00:20:53.467 "ffdhe4096", 00:20:53.467 "ffdhe6144", 00:20:53.467 "ffdhe8192" 00:20:53.467 ] 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_set_max_subsystems", 00:20:53.467 "params": { 00:20:53.467 "max_subsystems": 1024 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_set_crdt", 00:20:53.467 "params": { 00:20:53.467 "crdt1": 0, 00:20:53.467 "crdt2": 0, 00:20:53.467 "crdt3": 0 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_create_transport", 00:20:53.467 "params": { 00:20:53.467 "trtype": "TCP", 00:20:53.467 "max_queue_depth": 128, 00:20:53.467 "max_io_qpairs_per_ctrlr": 127, 00:20:53.467 "in_capsule_data_size": 4096, 00:20:53.467 "max_io_size": 131072, 00:20:53.467 "io_unit_size": 131072, 00:20:53.467 "max_aq_depth": 128, 00:20:53.467 "num_shared_buffers": 511, 00:20:53.467 "buf_cache_size": 4294967295, 00:20:53.467 "dif_insert_or_strip": false, 00:20:53.467 "zcopy": false, 00:20:53.467 "c2h_success": false, 00:20:53.467 "sock_priority": 0, 00:20:53.467 "abort_timeout_sec": 1, 00:20:53.467 "ack_timeout": 0, 00:20:53.467 "data_wr_pool_size": 0 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_create_subsystem", 00:20:53.467 "params": { 00:20:53.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.467 "allow_any_host": false, 00:20:53.467 "serial_number": "SPDK00000000000001", 00:20:53.467 "model_number": "SPDK bdev Controller", 00:20:53.467 "max_namespaces": 10, 00:20:53.467 "min_cntlid": 1, 00:20:53.467 "max_cntlid": 65519, 00:20:53.467 
"ana_reporting": false 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_subsystem_add_host", 00:20:53.467 "params": { 00:20:53.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.467 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.467 "psk": "key0" 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_subsystem_add_ns", 00:20:53.467 "params": { 00:20:53.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.467 "namespace": { 00:20:53.467 "nsid": 1, 00:20:53.467 "bdev_name": "malloc0", 00:20:53.467 "nguid": "89BC46689AB24BD588148BB321CF35ED", 00:20:53.467 "uuid": "89bc4668-9ab2-4bd5-8814-8bb321cf35ed", 00:20:53.467 "no_auto_visible": false 00:20:53.467 } 00:20:53.467 } 00:20:53.467 }, 00:20:53.467 { 00:20:53.467 "method": "nvmf_subsystem_add_listener", 00:20:53.467 "params": { 00:20:53.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.467 "listen_address": { 00:20:53.467 "trtype": "TCP", 00:20:53.467 "adrfam": "IPv4", 00:20:53.467 "traddr": "10.0.0.2", 00:20:53.467 "trsvcid": "4420" 00:20:53.467 }, 00:20:53.467 "secure_channel": true 00:20:53.467 } 00:20:53.467 } 00:20:53.467 ] 00:20:53.467 } 00:20:53.467 ] 00:20:53.467 }' 00:20:53.467 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:53.728 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:53.728 "subsystems": [ 00:20:53.728 { 00:20:53.728 "subsystem": "keyring", 00:20:53.728 "config": [ 00:20:53.728 { 00:20:53.728 "method": "keyring_file_add_key", 00:20:53.728 "params": { 00:20:53.728 "name": "key0", 00:20:53.728 "path": "/tmp/tmp.JG4sKhEbG2" 00:20:53.728 } 00:20:53.728 } 00:20:53.728 ] 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "subsystem": "iobuf", 00:20:53.728 "config": [ 00:20:53.728 { 00:20:53.728 "method": "iobuf_set_options", 00:20:53.728 "params": { 00:20:53.728 "small_pool_count": 8192, 00:20:53.728 "large_pool_count": 1024, 00:20:53.728 "small_bufsize": 8192, 00:20:53.728 "large_bufsize": 135168, 00:20:53.728 "enable_numa": false 00:20:53.728 } 00:20:53.728 } 00:20:53.728 ] 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "subsystem": "sock", 00:20:53.728 "config": [ 00:20:53.728 { 00:20:53.728 "method": "sock_set_default_impl", 00:20:53.728 "params": { 00:20:53.728 "impl_name": "posix" 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "sock_impl_set_options", 00:20:53.728 "params": { 00:20:53.728 "impl_name": "ssl", 00:20:53.728 "recv_buf_size": 4096, 00:20:53.728 "send_buf_size": 4096, 00:20:53.728 "enable_recv_pipe": true, 00:20:53.728 "enable_quickack": false, 00:20:53.728 "enable_placement_id": 0, 00:20:53.728 "enable_zerocopy_send_server": true, 00:20:53.728 "enable_zerocopy_send_client": false, 00:20:53.728 "zerocopy_threshold": 0, 00:20:53.728 "tls_version": 0, 00:20:53.728 "enable_ktls": false 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "sock_impl_set_options", 00:20:53.728 "params": { 00:20:53.728 "impl_name": "posix", 00:20:53.728 "recv_buf_size": 2097152, 00:20:53.728 "send_buf_size": 2097152, 00:20:53.728 "enable_recv_pipe": true, 00:20:53.728 "enable_quickack": false, 00:20:53.728 "enable_placement_id": 0, 00:20:53.728 "enable_zerocopy_send_server": true, 00:20:53.728 "enable_zerocopy_send_client": false, 00:20:53.728 "zerocopy_threshold": 0, 00:20:53.728 "tls_version": 0, 00:20:53.728 "enable_ktls": false 00:20:53.728 } 00:20:53.728 } 00:20:53.728 ] 00:20:53.728 }, 
00:20:53.728 { 00:20:53.728 "subsystem": "vmd", 00:20:53.728 "config": [] 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "subsystem": "accel", 00:20:53.728 "config": [ 00:20:53.728 { 00:20:53.728 "method": "accel_set_options", 00:20:53.728 "params": { 00:20:53.728 "small_cache_size": 128, 00:20:53.728 "large_cache_size": 16, 00:20:53.728 "task_count": 2048, 00:20:53.728 "sequence_count": 2048, 00:20:53.728 "buf_count": 2048 00:20:53.728 } 00:20:53.728 } 00:20:53.728 ] 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "subsystem": "bdev", 00:20:53.728 "config": [ 00:20:53.728 { 00:20:53.728 "method": "bdev_set_options", 00:20:53.728 "params": { 00:20:53.728 "bdev_io_pool_size": 65535, 00:20:53.728 "bdev_io_cache_size": 256, 00:20:53.728 "bdev_auto_examine": true, 00:20:53.728 "iobuf_small_cache_size": 128, 00:20:53.728 "iobuf_large_cache_size": 16 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "bdev_raid_set_options", 00:20:53.728 "params": { 00:20:53.728 "process_window_size_kb": 1024, 00:20:53.728 "process_max_bandwidth_mb_sec": 0 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "bdev_iscsi_set_options", 00:20:53.728 "params": { 00:20:53.728 "timeout_sec": 30 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "bdev_nvme_set_options", 00:20:53.728 "params": { 00:20:53.728 "action_on_timeout": "none", 00:20:53.728 "timeout_us": 0, 00:20:53.728 "timeout_admin_us": 0, 00:20:53.728 "keep_alive_timeout_ms": 10000, 00:20:53.728 "arbitration_burst": 0, 00:20:53.728 "low_priority_weight": 0, 00:20:53.728 "medium_priority_weight": 0, 00:20:53.728 "high_priority_weight": 0, 00:20:53.728 "nvme_adminq_poll_period_us": 10000, 00:20:53.728 "nvme_ioq_poll_period_us": 0, 00:20:53.728 "io_queue_requests": 512, 00:20:53.728 "delay_cmd_submit": true, 00:20:53.728 "transport_retry_count": 4, 00:20:53.728 "bdev_retry_count": 3, 00:20:53.728 "transport_ack_timeout": 0, 00:20:53.728 "ctrlr_loss_timeout_sec": 0, 00:20:53.728 "reconnect_delay_sec": 0, 00:20:53.728 "fast_io_fail_timeout_sec": 0, 00:20:53.728 "disable_auto_failback": false, 00:20:53.728 "generate_uuids": false, 00:20:53.728 "transport_tos": 0, 00:20:53.728 "nvme_error_stat": false, 00:20:53.728 "rdma_srq_size": 0, 00:20:53.728 "io_path_stat": false, 00:20:53.728 "allow_accel_sequence": false, 00:20:53.728 "rdma_max_cq_size": 0, 00:20:53.728 "rdma_cm_event_timeout_ms": 0, 00:20:53.728 "dhchap_digests": [ 00:20:53.728 "sha256", 00:20:53.728 "sha384", 00:20:53.728 "sha512" 00:20:53.728 ], 00:20:53.728 "dhchap_dhgroups": [ 00:20:53.728 "null", 00:20:53.728 "ffdhe2048", 00:20:53.728 "ffdhe3072", 00:20:53.728 "ffdhe4096", 00:20:53.728 "ffdhe6144", 00:20:53.728 "ffdhe8192" 00:20:53.728 ] 00:20:53.728 } 00:20:53.728 }, 00:20:53.728 { 00:20:53.728 "method": "bdev_nvme_attach_controller", 00:20:53.728 "params": { 00:20:53.728 "name": "TLSTEST", 00:20:53.728 "trtype": "TCP", 00:20:53.728 "adrfam": "IPv4", 00:20:53.728 "traddr": "10.0.0.2", 00:20:53.728 "trsvcid": "4420", 00:20:53.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.728 "prchk_reftag": false, 00:20:53.728 "prchk_guard": false, 00:20:53.728 "ctrlr_loss_timeout_sec": 0, 00:20:53.728 "reconnect_delay_sec": 0, 00:20:53.728 "fast_io_fail_timeout_sec": 0, 00:20:53.728 "psk": "key0", 00:20:53.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.729 "hdgst": false, 00:20:53.729 "ddgst": false, 00:20:53.729 "multipath": "multipath" 00:20:53.729 } 00:20:53.729 }, 00:20:53.729 { 00:20:53.729 "method": "bdev_nvme_set_hotplug", 00:20:53.729 "params": { 
00:20:53.729 "period_us": 100000, 00:20:53.729 "enable": false 00:20:53.729 } 00:20:53.729 }, 00:20:53.729 { 00:20:53.729 "method": "bdev_wait_for_examine" 00:20:53.729 } 00:20:53.729 ] 00:20:53.729 }, 00:20:53.729 { 00:20:53.729 "subsystem": "nbd", 00:20:53.729 "config": [] 00:20:53.729 } 00:20:53.729 ] 00:20:53.729 }' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 349234 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349234 ']' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349234 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349234 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349234' 00:20:53.729 killing process with pid 349234 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349234 00:20:53.729 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.729 00:20:53.729 Latency(us) 00:20:53.729 [2024-12-09T05:19:48.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.729 [2024-12-09T05:19:48.316Z] =================================================================================================================== 00:20:53.729 [2024-12-09T05:19:48.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349234 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 349030 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349030 ']' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349030 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.729 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349030 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349030' 00:20:53.989 killing process with pid 349030 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349030 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349030 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 
00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.989 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:53.989 "subsystems": [ 00:20:53.989 { 00:20:53.989 "subsystem": "keyring", 00:20:53.989 "config": [ 00:20:53.989 { 00:20:53.989 "method": "keyring_file_add_key", 00:20:53.989 "params": { 00:20:53.989 "name": "key0", 00:20:53.989 "path": "/tmp/tmp.JG4sKhEbG2" 00:20:53.989 } 00:20:53.989 } 00:20:53.989 ] 00:20:53.989 }, 00:20:53.989 { 00:20:53.989 "subsystem": "iobuf", 00:20:53.989 "config": [ 00:20:53.989 { 00:20:53.989 "method": "iobuf_set_options", 00:20:53.989 "params": { 00:20:53.989 "small_pool_count": 8192, 00:20:53.990 "large_pool_count": 1024, 00:20:53.990 "small_bufsize": 8192, 00:20:53.990 "large_bufsize": 135168, 00:20:53.990 "enable_numa": false 00:20:53.990 } 00:20:53.990 } 00:20:53.990 ] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "sock", 00:20:53.990 "config": [ 00:20:53.990 { 00:20:53.990 "method": "sock_set_default_impl", 00:20:53.990 "params": { 00:20:53.990 "impl_name": "posix" 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "sock_impl_set_options", 00:20:53.990 "params": { 00:20:53.990 "impl_name": "ssl", 00:20:53.990 "recv_buf_size": 4096, 00:20:53.990 "send_buf_size": 4096, 00:20:53.990 "enable_recv_pipe": true, 00:20:53.990 "enable_quickack": false, 00:20:53.990 "enable_placement_id": 0, 00:20:53.990 "enable_zerocopy_send_server": true, 00:20:53.990 "enable_zerocopy_send_client": false, 00:20:53.990 "zerocopy_threshold": 0, 00:20:53.990 "tls_version": 0, 00:20:53.990 "enable_ktls": false 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "sock_impl_set_options", 00:20:53.990 "params": { 00:20:53.990 "impl_name": "posix", 00:20:53.990 "recv_buf_size": 2097152, 00:20:53.990 "send_buf_size": 2097152, 00:20:53.990 "enable_recv_pipe": true, 00:20:53.990 "enable_quickack": false, 00:20:53.990 "enable_placement_id": 0, 00:20:53.990 "enable_zerocopy_send_server": true, 00:20:53.990 "enable_zerocopy_send_client": false, 00:20:53.990 "zerocopy_threshold": 0, 00:20:53.990 "tls_version": 0, 00:20:53.990 "enable_ktls": false 00:20:53.990 } 00:20:53.990 } 00:20:53.990 ] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "vmd", 00:20:53.990 "config": [] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "accel", 00:20:53.990 "config": [ 00:20:53.990 { 00:20:53.990 "method": "accel_set_options", 00:20:53.990 "params": { 00:20:53.990 "small_cache_size": 128, 00:20:53.990 "large_cache_size": 16, 00:20:53.990 "task_count": 2048, 00:20:53.990 "sequence_count": 2048, 00:20:53.990 "buf_count": 2048 00:20:53.990 } 00:20:53.990 } 00:20:53.990 ] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "bdev", 00:20:53.990 "config": [ 00:20:53.990 { 00:20:53.990 "method": "bdev_set_options", 00:20:53.990 "params": { 00:20:53.990 "bdev_io_pool_size": 65535, 00:20:53.990 "bdev_io_cache_size": 256, 00:20:53.990 "bdev_auto_examine": true, 00:20:53.990 "iobuf_small_cache_size": 128, 00:20:53.990 "iobuf_large_cache_size": 16 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_raid_set_options", 00:20:53.990 "params": { 00:20:53.990 "process_window_size_kb": 1024, 00:20:53.990 
"process_max_bandwidth_mb_sec": 0 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_iscsi_set_options", 00:20:53.990 "params": { 00:20:53.990 "timeout_sec": 30 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_nvme_set_options", 00:20:53.990 "params": { 00:20:53.990 "action_on_timeout": "none", 00:20:53.990 "timeout_us": 0, 00:20:53.990 "timeout_admin_us": 0, 00:20:53.990 "keep_alive_timeout_ms": 10000, 00:20:53.990 "arbitration_burst": 0, 00:20:53.990 "low_priority_weight": 0, 00:20:53.990 "medium_priority_weight": 0, 00:20:53.990 "high_priority_weight": 0, 00:20:53.990 "nvme_adminq_poll_period_us": 10000, 00:20:53.990 "nvme_ioq_poll_period_us": 0, 00:20:53.990 "io_queue_requests": 0, 00:20:53.990 "delay_cmd_submit": true, 00:20:53.990 "transport_retry_count": 4, 00:20:53.990 "bdev_retry_count": 3, 00:20:53.990 "transport_ack_timeout": 0, 00:20:53.990 "ctrlr_loss_timeout_sec": 0, 00:20:53.990 "reconnect_delay_sec": 0, 00:20:53.990 "fast_io_fail_timeout_sec": 0, 00:20:53.990 "disable_auto_failback": false, 00:20:53.990 "generate_uuids": false, 00:20:53.990 "transport_tos": 0, 00:20:53.990 "nvme_error_stat": false, 00:20:53.990 "rdma_srq_size": 0, 00:20:53.990 "io_path_stat": false, 00:20:53.990 "allow_accel_sequence": false, 00:20:53.990 "rdma_max_cq_size": 0, 00:20:53.990 "rdma_cm_event_timeout_ms": 0, 00:20:53.990 "dhchap_digests": [ 00:20:53.990 "sha256", 00:20:53.990 "sha384", 00:20:53.990 "sha512" 00:20:53.990 ], 00:20:53.990 "dhchap_dhgroups": [ 00:20:53.990 "null", 00:20:53.990 "ffdhe2048", 00:20:53.990 "ffdhe3072", 00:20:53.990 "ffdhe4096", 00:20:53.990 "ffdhe6144", 00:20:53.990 "ffdhe8192" 00:20:53.990 ] 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_nvme_set_hotplug", 00:20:53.990 "params": { 00:20:53.990 "period_us": 100000, 00:20:53.990 "enable": false 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_malloc_create", 00:20:53.990 "params": { 00:20:53.990 "name": "malloc0", 00:20:53.990 "num_blocks": 8192, 00:20:53.990 "block_size": 4096, 00:20:53.990 "physical_block_size": 4096, 00:20:53.990 "uuid": "89bc4668-9ab2-4bd5-8814-8bb321cf35ed", 00:20:53.990 "optimal_io_boundary": 0, 00:20:53.990 "md_size": 0, 00:20:53.990 "dif_type": 0, 00:20:53.990 "dif_is_head_of_md": false, 00:20:53.990 "dif_pi_format": 0 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "bdev_wait_for_examine" 00:20:53.990 } 00:20:53.990 ] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "nbd", 00:20:53.990 "config": [] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "scheduler", 00:20:53.990 "config": [ 00:20:53.990 { 00:20:53.990 "method": "framework_set_scheduler", 00:20:53.990 "params": { 00:20:53.990 "name": "static" 00:20:53.990 } 00:20:53.990 } 00:20:53.990 ] 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "subsystem": "nvmf", 00:20:53.990 "config": [ 00:20:53.990 { 00:20:53.990 "method": "nvmf_set_config", 00:20:53.990 "params": { 00:20:53.990 "discovery_filter": "match_any", 00:20:53.990 "admin_cmd_passthru": { 00:20:53.990 "identify_ctrlr": false 00:20:53.990 }, 00:20:53.990 "dhchap_digests": [ 00:20:53.990 "sha256", 00:20:53.990 "sha384", 00:20:53.990 "sha512" 00:20:53.990 ], 00:20:53.990 "dhchap_dhgroups": [ 00:20:53.990 "null", 00:20:53.990 "ffdhe2048", 00:20:53.990 "ffdhe3072", 00:20:53.990 "ffdhe4096", 00:20:53.990 "ffdhe6144", 00:20:53.990 "ffdhe8192" 00:20:53.990 ] 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "nvmf_set_max_subsystems", 
00:20:53.990 "params": { 00:20:53.990 "max_subsystems": 1024 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "nvmf_set_crdt", 00:20:53.990 "params": { 00:20:53.990 "crdt1": 0, 00:20:53.990 "crdt2": 0, 00:20:53.990 "crdt3": 0 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "nvmf_create_transport", 00:20:53.990 "params": { 00:20:53.990 "trtype": "TCP", 00:20:53.990 "max_queue_depth": 128, 00:20:53.990 "max_io_qpairs_per_ctrlr": 127, 00:20:53.990 "in_capsule_data_size": 4096, 00:20:53.990 "max_io_size": 131072, 00:20:53.990 "io_unit_size": 131072, 00:20:53.990 "max_aq_depth": 128, 00:20:53.990 "num_shared_buffers": 511, 00:20:53.990 "buf_cache_size": 4294967295, 00:20:53.990 "dif_insert_or_strip": false, 00:20:53.990 "zcopy": false, 00:20:53.990 "c2h_success": false, 00:20:53.990 "sock_priority": 0, 00:20:53.990 "abort_timeout_sec": 1, 00:20:53.990 "ack_timeout": 0, 00:20:53.990 "data_wr_pool_size": 0 00:20:53.990 } 00:20:53.990 }, 00:20:53.990 { 00:20:53.990 "method": "nvmf_create_subsystem", 00:20:53.990 "params": { 00:20:53.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.991 "allow_any_host": false, 00:20:53.991 "serial_number": "SPDK00000000000001", 00:20:53.991 "model_number": "SPDK bdev Controller", 00:20:53.991 "max_namespaces": 10, 00:20:53.991 "min_cntlid": 1, 00:20:53.991 "max_cntlid": 65519, 00:20:53.991 "ana_reporting": false 00:20:53.991 } 00:20:53.991 }, 00:20:53.991 { 00:20:53.991 "method": "nvmf_subsystem_add_host", 00:20:53.991 "params": { 00:20:53.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.991 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.991 "psk": "key0" 00:20:53.991 } 00:20:53.991 }, 00:20:53.991 { 00:20:53.991 "method": "nvmf_subsystem_add_ns", 00:20:53.991 "params": { 00:20:53.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.991 "namespace": { 00:20:53.991 "nsid": 1, 00:20:53.991 "bdev_name": "malloc0", 00:20:53.991 "nguid": "89BC46689AB24BD588148BB321CF35ED", 00:20:53.991 "uuid": "89bc4668-9ab2-4bd5-8814-8bb321cf35ed", 00:20:53.991 "no_auto_visible": false 00:20:53.991 } 00:20:53.991 } 00:20:53.991 }, 00:20:53.991 { 00:20:53.991 "method": "nvmf_subsystem_add_listener", 00:20:53.991 "params": { 00:20:53.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.991 "listen_address": { 00:20:53.991 "trtype": "TCP", 00:20:53.991 "adrfam": "IPv4", 00:20:53.991 "traddr": "10.0.0.2", 00:20:53.991 "trsvcid": "4420" 00:20:53.991 }, 00:20:53.991 "secure_channel": true 00:20:53.991 } 00:20:53.991 } 00:20:53.991 ] 00:20:53.991 } 00:20:53.991 ] 00:20:53.991 }' 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=349533 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 349533 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349533 ']' 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:53.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.991 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.991 [2024-12-09 06:19:48.508372] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:20:53.991 [2024-12-09 06:19:48.508437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.251 [2024-12-09 06:19:48.574111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.251 [2024-12-09 06:19:48.600717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.251 [2024-12-09 06:19:48.600749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.251 [2024-12-09 06:19:48.600755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.251 [2024-12-09 06:19:48.600761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.251 [2024-12-09 06:19:48.600765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.251 [2024-12-09 06:19:48.601262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.251 [2024-12-09 06:19:48.794052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.251 [2024-12-09 06:19:48.826069] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.251 [2024-12-09 06:19:48.826259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=349797 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 349797 /var/tmp/bdevperf.sock 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349797 ']' 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
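The bdevperf instance launched next receives its configuration as /dev/fd/63: the harness echoes the JSON captured by save_config and hands it over through bash process substitution, so no config file touches disk. A minimal sketch of that pattern, assuming $bdevperfconf holds the JSON echoed below:

    # <(...) expands to /dev/fd/63; bdevperf reads it like an ordinary -c file.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")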
00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:54.820 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.821 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:54.821 "subsystems": [ 00:20:54.821 { 00:20:54.821 "subsystem": "keyring", 00:20:54.821 "config": [ 00:20:54.821 { 00:20:54.821 "method": "keyring_file_add_key", 00:20:54.821 "params": { 00:20:54.821 "name": "key0", 00:20:54.821 "path": "/tmp/tmp.JG4sKhEbG2" 00:20:54.821 } 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "iobuf", 00:20:54.821 "config": [ 00:20:54.821 { 00:20:54.821 "method": "iobuf_set_options", 00:20:54.821 "params": { 00:20:54.821 "small_pool_count": 8192, 00:20:54.821 "large_pool_count": 1024, 00:20:54.821 "small_bufsize": 8192, 00:20:54.821 "large_bufsize": 135168, 00:20:54.821 "enable_numa": false 00:20:54.821 } 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "sock", 00:20:54.821 "config": [ 00:20:54.821 { 00:20:54.821 "method": "sock_set_default_impl", 00:20:54.821 "params": { 00:20:54.821 "impl_name": "posix" 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "sock_impl_set_options", 00:20:54.821 "params": { 00:20:54.821 "impl_name": "ssl", 00:20:54.821 "recv_buf_size": 4096, 00:20:54.821 "send_buf_size": 4096, 00:20:54.821 "enable_recv_pipe": true, 00:20:54.821 "enable_quickack": false, 00:20:54.821 "enable_placement_id": 0, 00:20:54.821 "enable_zerocopy_send_server": true, 00:20:54.821 "enable_zerocopy_send_client": false, 00:20:54.821 "zerocopy_threshold": 0, 00:20:54.821 "tls_version": 0, 00:20:54.821 "enable_ktls": false 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "sock_impl_set_options", 00:20:54.821 "params": { 00:20:54.821 "impl_name": "posix", 00:20:54.821 "recv_buf_size": 2097152, 00:20:54.821 "send_buf_size": 2097152, 00:20:54.821 "enable_recv_pipe": true, 00:20:54.821 "enable_quickack": false, 00:20:54.821 "enable_placement_id": 0, 00:20:54.821 "enable_zerocopy_send_server": true, 00:20:54.821 "enable_zerocopy_send_client": false, 00:20:54.821 "zerocopy_threshold": 0, 00:20:54.821 "tls_version": 0, 00:20:54.821 "enable_ktls": false 00:20:54.821 } 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "vmd", 00:20:54.821 "config": [] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "accel", 00:20:54.821 "config": [ 00:20:54.821 { 00:20:54.821 "method": "accel_set_options", 00:20:54.821 "params": { 00:20:54.821 "small_cache_size": 128, 00:20:54.821 "large_cache_size": 16, 00:20:54.821 "task_count": 2048, 00:20:54.821 "sequence_count": 2048, 00:20:54.821 "buf_count": 2048 00:20:54.821 } 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "bdev", 00:20:54.821 "config": [ 00:20:54.821 { 00:20:54.821 "method": "bdev_set_options", 00:20:54.821 "params": { 00:20:54.821 "bdev_io_pool_size": 65535, 00:20:54.821 "bdev_io_cache_size": 256, 00:20:54.821 "bdev_auto_examine": true, 00:20:54.821 "iobuf_small_cache_size": 128, 00:20:54.821 "iobuf_large_cache_size": 16 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": 
"bdev_raid_set_options", 00:20:54.821 "params": { 00:20:54.821 "process_window_size_kb": 1024, 00:20:54.821 "process_max_bandwidth_mb_sec": 0 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "bdev_iscsi_set_options", 00:20:54.821 "params": { 00:20:54.821 "timeout_sec": 30 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "bdev_nvme_set_options", 00:20:54.821 "params": { 00:20:54.821 "action_on_timeout": "none", 00:20:54.821 "timeout_us": 0, 00:20:54.821 "timeout_admin_us": 0, 00:20:54.821 "keep_alive_timeout_ms": 10000, 00:20:54.821 "arbitration_burst": 0, 00:20:54.821 "low_priority_weight": 0, 00:20:54.821 "medium_priority_weight": 0, 00:20:54.821 "high_priority_weight": 0, 00:20:54.821 "nvme_adminq_poll_period_us": 10000, 00:20:54.821 "nvme_ioq_poll_period_us": 0, 00:20:54.821 "io_queue_requests": 512, 00:20:54.821 "delay_cmd_submit": true, 00:20:54.821 "transport_retry_count": 4, 00:20:54.821 "bdev_retry_count": 3, 00:20:54.821 "transport_ack_timeout": 0, 00:20:54.821 "ctrlr_loss_timeout_sec": 0, 00:20:54.821 "reconnect_delay_sec": 0, 00:20:54.821 "fast_io_fail_timeout_sec": 0, 00:20:54.821 "disable_auto_failback": false, 00:20:54.821 "generate_uuids": false, 00:20:54.821 "transport_tos": 0, 00:20:54.821 "nvme_error_stat": false, 00:20:54.821 "rdma_srq_size": 0, 00:20:54.821 "io_path_stat": false, 00:20:54.821 "allow_accel_sequence": false, 00:20:54.821 "rdma_max_cq_size": 0, 00:20:54.821 "rdma_cm_event_timeout_ms": 0, 00:20:54.821 "dhchap_digests": [ 00:20:54.821 "sha256", 00:20:54.821 "sha384", 00:20:54.821 "sha512" 00:20:54.821 ], 00:20:54.821 "dhchap_dhgroups": [ 00:20:54.821 "null", 00:20:54.821 "ffdhe2048", 00:20:54.821 "ffdhe3072", 00:20:54.821 "ffdhe4096", 00:20:54.821 "ffdhe6144", 00:20:54.821 "ffdhe8192" 00:20:54.821 ] 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "bdev_nvme_attach_controller", 00:20:54.821 "params": { 00:20:54.821 "name": "TLSTEST", 00:20:54.821 "trtype": "TCP", 00:20:54.821 "adrfam": "IPv4", 00:20:54.821 "traddr": "10.0.0.2", 00:20:54.821 "trsvcid": "4420", 00:20:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.821 "prchk_reftag": false, 00:20:54.821 "prchk_guard": false, 00:20:54.821 "ctrlr_loss_timeout_sec": 0, 00:20:54.821 "reconnect_delay_sec": 0, 00:20:54.821 "fast_io_fail_timeout_sec": 0, 00:20:54.821 "psk": "key0", 00:20:54.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.821 "hdgst": false, 00:20:54.821 "ddgst": false, 00:20:54.821 "multipath": "multipath" 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "bdev_nvme_set_hotplug", 00:20:54.821 "params": { 00:20:54.821 "period_us": 100000, 00:20:54.821 "enable": false 00:20:54.821 } 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "method": "bdev_wait_for_examine" 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }, 00:20:54.821 { 00:20:54.821 "subsystem": "nbd", 00:20:54.821 "config": [] 00:20:54.821 } 00:20:54.821 ] 00:20:54.821 }' 00:20:54.821 [2024-12-09 06:19:49.397610] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:20:54.821 [2024-12-09 06:19:49.397664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349797 ] 00:20:55.081 [2024-12-09 06:19:49.456019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.081 [2024-12-09 06:19:49.485030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.081 [2024-12-09 06:19:49.619328] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.651 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.651 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.651 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.911 Running I/O for 10 seconds... 00:20:57.792 3584.00 IOPS, 14.00 MiB/s [2024-12-09T05:19:53.323Z] 2676.50 IOPS, 10.46 MiB/s [2024-12-09T05:19:54.703Z] 2306.67 IOPS, 9.01 MiB/s [2024-12-09T05:19:55.654Z] 2562.00 IOPS, 10.01 MiB/s [2024-12-09T05:19:56.596Z] 2941.80 IOPS, 11.49 MiB/s [2024-12-09T05:19:57.534Z] 2849.00 IOPS, 11.13 MiB/s [2024-12-09T05:19:58.473Z] 2708.86 IOPS, 10.58 MiB/s [2024-12-09T05:19:59.413Z] 2874.75 IOPS, 11.23 MiB/s [2024-12-09T05:20:00.353Z] 3006.44 IOPS, 11.74 MiB/s [2024-12-09T05:20:00.353Z] 2970.90 IOPS, 11.61 MiB/s 00:21:05.766 Latency(us) 00:21:05.766 [2024-12-09T05:20:00.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.766 Verification LBA range: start 0x0 length 0x2000 00:21:05.766 TLSTESTn1 : 10.01 2979.39 11.64 0.00 0.00 42918.75 4763.96 64124.46 00:21:05.766 [2024-12-09T05:20:00.353Z] =================================================================================================================== 00:21:05.766 [2024-12-09T05:20:00.353Z] Total : 2979.39 11.64 0.00 0.00 42918.75 4763.96 64124.46 00:21:05.766 { 00:21:05.766 "results": [ 00:21:05.766 { 00:21:05.766 "job": "TLSTESTn1", 00:21:05.766 "core_mask": "0x4", 00:21:05.766 "workload": "verify", 00:21:05.766 "status": "finished", 00:21:05.766 "verify_range": { 00:21:05.766 "start": 0, 00:21:05.766 "length": 8192 00:21:05.766 }, 00:21:05.766 "queue_depth": 128, 00:21:05.766 "io_size": 4096, 00:21:05.766 "runtime": 10.014482, 00:21:05.766 "iops": 2979.3852542747595, 00:21:05.766 "mibps": 11.63822364951078, 00:21:05.766 "io_failed": 0, 00:21:05.766 "io_timeout": 0, 00:21:05.766 "avg_latency_us": 42918.74633302482, 00:21:05.766 "min_latency_us": 4763.963076923077, 00:21:05.766 "max_latency_us": 64124.45538461539 00:21:05.766 } 00:21:05.766 ], 00:21:05.766 "core_count": 1 00:21:05.766 } 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 349797 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349797 ']' 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349797 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.766 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349797 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349797' 00:21:06.025 killing process with pid 349797 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349797 00:21:06.025 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.025 00:21:06.025 Latency(us) 00:21:06.025 [2024-12-09T05:20:00.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.025 [2024-12-09T05:20:00.612Z] =================================================================================================================== 00:21:06.025 [2024-12-09T05:20:00.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349797 00:21:06.025 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 349533 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349533 ']' 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349533 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349533 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349533' 00:21:06.026 killing process with pid 349533 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349533 00:21:06.026 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349533 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351682 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351682 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351682 ']' 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.286 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.286 [2024-12-09 06:20:00.730317] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:06.286 [2024-12-09 06:20:00.730370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.286 [2024-12-09 06:20:00.822873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.548 [2024-12-09 06:20:00.871560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.548 [2024-12-09 06:20:00.871611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.548 [2024-12-09 06:20:00.871619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.548 [2024-12-09 06:20:00.871627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.548 [2024-12-09 06:20:00.871632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
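The waitforlisten fragments interleaved through this log (local rpc_addr=..., local max_retries=100, the (( i == 0 )) check, return 0) come from a polling helper in common/autotest_common.sh. A rough sketch of its shape, assuming rpc_get_methods as the liveness probe; the verbatim helper may differ:

    waitforlisten() {
        # Block until process $pid is up and answering RPCs on its UNIX socket.
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # app died during startup
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1                                       # timed out
    }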
00:21:06.548 [2024-12-09 06:20:00.872343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JG4sKhEbG2 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JG4sKhEbG2 00:21:07.116 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.375 [2024-12-09 06:20:01.757837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.375 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.375 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.636 [2024-12-09 06:20:02.106699] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.636 [2024-12-09 06:20:02.107013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.636 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.913 malloc0 00:21:07.913 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:08.174 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:21:08.174 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=352017 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 352017 /var/tmp/bdevperf.sock 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352017 ']' 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.435 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.435 [2024-12-09 06:20:02.922416] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:08.435 [2024-12-09 06:20:02.922490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352017 ] 00:21:08.435 [2024-12-09 06:20:02.986043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.695 [2024-12-09 06:20:03.023930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.695 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.695 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:08.695 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:21:08.695 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:08.956 [2024-12-09 06:20:03.426526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.956 nvme0n1 00:21:08.956 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.216 Running I/O for 1 seconds... 
00:21:10.158 4543.00 IOPS, 17.75 MiB/s 00:21:10.158 Latency(us) 00:21:10.158 [2024-12-09T05:20:04.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.158 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.158 Verification LBA range: start 0x0 length 0x2000 00:21:10.158 nvme0n1 : 1.07 4365.77 17.05 0.00 0.00 28482.26 5973.86 112116.97 00:21:10.158 [2024-12-09T05:20:04.745Z] =================================================================================================================== 00:21:10.158 [2024-12-09T05:20:04.745Z] Total : 4365.77 17.05 0.00 0.00 28482.26 5973.86 112116.97 00:21:10.158 { 00:21:10.158 "results": [ 00:21:10.158 { 00:21:10.158 "job": "nvme0n1", 00:21:10.158 "core_mask": "0x2", 00:21:10.158 "workload": "verify", 00:21:10.158 "status": "finished", 00:21:10.158 "verify_range": { 00:21:10.158 "start": 0, 00:21:10.158 "length": 8192 00:21:10.158 }, 00:21:10.158 "queue_depth": 128, 00:21:10.158 "io_size": 4096, 00:21:10.158 "runtime": 1.069915, 00:21:10.158 "iops": 4365.767374043732, 00:21:10.158 "mibps": 17.05377880485833, 00:21:10.158 "io_failed": 0, 00:21:10.158 "io_timeout": 0, 00:21:10.158 "avg_latency_us": 28482.255024290633, 00:21:10.158 "min_latency_us": 5973.858461538462, 00:21:10.158 "max_latency_us": 112116.97230769231 00:21:10.158 } 00:21:10.158 ], 00:21:10.158 "core_count": 1 00:21:10.158 } 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 352017 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352017 ']' 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352017 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.158 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352017 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352017' 00:21:10.419 killing process with pid 352017 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352017 00:21:10.419 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.419 00:21:10.419 Latency(us) 00:21:10.419 [2024-12-09T05:20:05.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.419 [2024-12-09T05:20:05.006Z] =================================================================================================================== 00:21:10.419 [2024-12-09T05:20:05.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352017 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 351682 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351682 ']' 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351682 00:21:10.419 06:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351682 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351682' 00:21:10.419 killing process with pid 351682 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351682 00:21:10.419 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351682 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=352337 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 352337 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352337 ']' 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.682 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.682 [2024-12-09 06:20:05.112352] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:10.682 [2024-12-09 06:20:05.112411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.682 [2024-12-09 06:20:05.206407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.682 [2024-12-09 06:20:05.252734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.682 [2024-12-09 06:20:05.252791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:10.682 [2024-12-09 06:20:05.252799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.682 [2024-12-09 06:20:05.252806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.682 [2024-12-09 06:20:05.252812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.682 [2024-12-09 06:20:05.253603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.622 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.622 [2024-12-09 06:20:05.971152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.622 malloc0 00:21:11.622 [2024-12-09 06:20:06.001160] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.622 [2024-12-09 06:20:06.001487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=352578 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 352578 /var/tmp/bdevperf.sock 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352578 ']' 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.622 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.622 [2024-12-09 06:20:06.066256] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:21:11.622 [2024-12-09 06:20:06.066316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352578 ] 00:21:11.622 [2024-12-09 06:20:06.129927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.622 [2024-12-09 06:20:06.167439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.882 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.882 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.882 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JG4sKhEbG2 00:21:11.882 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.141 [2024-12-09 06:20:06.554063] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.141 nvme0n1 00:21:12.141 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.401 Running I/O for 1 seconds... 00:21:13.338 1567.00 IOPS, 6.12 MiB/s 00:21:13.338 Latency(us) 00:21:13.338 [2024-12-09T05:20:07.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:13.338 Verification LBA range: start 0x0 length 0x2000 00:21:13.339 nvme0n1 : 1.10 1547.11 6.04 0.00 0.00 79888.90 6175.51 91952.05 00:21:13.339 [2024-12-09T05:20:07.926Z] =================================================================================================================== 00:21:13.339 [2024-12-09T05:20:07.926Z] Total : 1547.11 6.04 0.00 0.00 79888.90 6175.51 91952.05 00:21:13.339 { 00:21:13.339 "results": [ 00:21:13.339 { 00:21:13.339 "job": "nvme0n1", 00:21:13.339 "core_mask": "0x2", 00:21:13.339 "workload": "verify", 00:21:13.339 "status": "finished", 00:21:13.339 "verify_range": { 00:21:13.339 "start": 0, 00:21:13.339 "length": 8192 00:21:13.339 }, 00:21:13.339 "queue_depth": 128, 00:21:13.339 "io_size": 4096, 00:21:13.339 "runtime": 1.096241, 00:21:13.339 "iops": 1547.1050617519322, 00:21:13.339 "mibps": 6.043379147468485, 00:21:13.339 "io_failed": 0, 00:21:13.339 "io_timeout": 0, 00:21:13.339 "avg_latency_us": 79888.89822931784, 00:21:13.339 "min_latency_us": 6175.507692307692, 00:21:13.339 "max_latency_us": 91952.04923076923 00:21:13.339 } 00:21:13.339 ], 00:21:13.339 "core_count": 1 00:21:13.339 } 00:21:13.339 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:13.339 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.339 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.598 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.598 06:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:13.598 "subsystems": [ 00:21:13.598 { 00:21:13.598 "subsystem": "keyring", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "keyring_file_add_key", 00:21:13.598 "params": { 00:21:13.598 "name": "key0", 00:21:13.598 "path": "/tmp/tmp.JG4sKhEbG2" 00:21:13.598 } 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "iobuf", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "iobuf_set_options", 00:21:13.598 "params": { 00:21:13.598 "small_pool_count": 8192, 00:21:13.598 "large_pool_count": 1024, 00:21:13.598 "small_bufsize": 8192, 00:21:13.598 "large_bufsize": 135168, 00:21:13.598 "enable_numa": false 00:21:13.598 } 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "sock", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "sock_set_default_impl", 00:21:13.598 "params": { 00:21:13.598 "impl_name": "posix" 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "sock_impl_set_options", 00:21:13.598 "params": { 00:21:13.598 "impl_name": "ssl", 00:21:13.598 "recv_buf_size": 4096, 00:21:13.598 "send_buf_size": 4096, 00:21:13.598 "enable_recv_pipe": true, 00:21:13.598 "enable_quickack": false, 00:21:13.598 "enable_placement_id": 0, 00:21:13.598 "enable_zerocopy_send_server": true, 00:21:13.598 "enable_zerocopy_send_client": false, 00:21:13.598 "zerocopy_threshold": 0, 00:21:13.598 "tls_version": 0, 00:21:13.598 "enable_ktls": false 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "sock_impl_set_options", 00:21:13.598 "params": { 00:21:13.598 "impl_name": "posix", 00:21:13.598 "recv_buf_size": 2097152, 00:21:13.598 "send_buf_size": 2097152, 00:21:13.598 "enable_recv_pipe": true, 00:21:13.598 "enable_quickack": false, 00:21:13.598 "enable_placement_id": 0, 00:21:13.598 "enable_zerocopy_send_server": true, 00:21:13.598 "enable_zerocopy_send_client": false, 00:21:13.598 "zerocopy_threshold": 0, 00:21:13.598 "tls_version": 0, 00:21:13.598 "enable_ktls": false 00:21:13.598 } 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "vmd", 00:21:13.598 "config": [] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "accel", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "accel_set_options", 00:21:13.598 "params": { 00:21:13.598 "small_cache_size": 128, 00:21:13.598 "large_cache_size": 16, 00:21:13.598 "task_count": 2048, 00:21:13.598 "sequence_count": 2048, 00:21:13.598 "buf_count": 2048 00:21:13.598 } 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "bdev", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "bdev_set_options", 00:21:13.598 "params": { 00:21:13.598 "bdev_io_pool_size": 65535, 00:21:13.598 "bdev_io_cache_size": 256, 00:21:13.598 "bdev_auto_examine": true, 00:21:13.598 "iobuf_small_cache_size": 128, 00:21:13.598 "iobuf_large_cache_size": 16 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_raid_set_options", 00:21:13.598 "params": { 00:21:13.598 "process_window_size_kb": 1024, 00:21:13.598 "process_max_bandwidth_mb_sec": 0 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_iscsi_set_options", 00:21:13.598 "params": { 00:21:13.598 "timeout_sec": 30 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_nvme_set_options", 00:21:13.598 "params": { 00:21:13.598 "action_on_timeout": "none", 00:21:13.598 
"timeout_us": 0, 00:21:13.598 "timeout_admin_us": 0, 00:21:13.598 "keep_alive_timeout_ms": 10000, 00:21:13.598 "arbitration_burst": 0, 00:21:13.598 "low_priority_weight": 0, 00:21:13.598 "medium_priority_weight": 0, 00:21:13.598 "high_priority_weight": 0, 00:21:13.598 "nvme_adminq_poll_period_us": 10000, 00:21:13.598 "nvme_ioq_poll_period_us": 0, 00:21:13.598 "io_queue_requests": 0, 00:21:13.598 "delay_cmd_submit": true, 00:21:13.598 "transport_retry_count": 4, 00:21:13.598 "bdev_retry_count": 3, 00:21:13.598 "transport_ack_timeout": 0, 00:21:13.598 "ctrlr_loss_timeout_sec": 0, 00:21:13.598 "reconnect_delay_sec": 0, 00:21:13.598 "fast_io_fail_timeout_sec": 0, 00:21:13.598 "disable_auto_failback": false, 00:21:13.598 "generate_uuids": false, 00:21:13.598 "transport_tos": 0, 00:21:13.598 "nvme_error_stat": false, 00:21:13.598 "rdma_srq_size": 0, 00:21:13.598 "io_path_stat": false, 00:21:13.598 "allow_accel_sequence": false, 00:21:13.598 "rdma_max_cq_size": 0, 00:21:13.598 "rdma_cm_event_timeout_ms": 0, 00:21:13.598 "dhchap_digests": [ 00:21:13.598 "sha256", 00:21:13.598 "sha384", 00:21:13.598 "sha512" 00:21:13.598 ], 00:21:13.598 "dhchap_dhgroups": [ 00:21:13.598 "null", 00:21:13.598 "ffdhe2048", 00:21:13.598 "ffdhe3072", 00:21:13.598 "ffdhe4096", 00:21:13.598 "ffdhe6144", 00:21:13.598 "ffdhe8192" 00:21:13.598 ] 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_nvme_set_hotplug", 00:21:13.598 "params": { 00:21:13.598 "period_us": 100000, 00:21:13.598 "enable": false 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_malloc_create", 00:21:13.598 "params": { 00:21:13.598 "name": "malloc0", 00:21:13.598 "num_blocks": 8192, 00:21:13.598 "block_size": 4096, 00:21:13.598 "physical_block_size": 4096, 00:21:13.598 "uuid": "55a32d31-a95f-4b0a-999f-80eb4e289213", 00:21:13.598 "optimal_io_boundary": 0, 00:21:13.598 "md_size": 0, 00:21:13.598 "dif_type": 0, 00:21:13.598 "dif_is_head_of_md": false, 00:21:13.598 "dif_pi_format": 0 00:21:13.598 } 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "method": "bdev_wait_for_examine" 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "nbd", 00:21:13.598 "config": [] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "scheduler", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.598 "method": "framework_set_scheduler", 00:21:13.598 "params": { 00:21:13.598 "name": "static" 00:21:13.598 } 00:21:13.598 } 00:21:13.598 ] 00:21:13.598 }, 00:21:13.598 { 00:21:13.598 "subsystem": "nvmf", 00:21:13.598 "config": [ 00:21:13.598 { 00:21:13.599 "method": "nvmf_set_config", 00:21:13.599 "params": { 00:21:13.599 "discovery_filter": "match_any", 00:21:13.599 "admin_cmd_passthru": { 00:21:13.599 "identify_ctrlr": false 00:21:13.599 }, 00:21:13.599 "dhchap_digests": [ 00:21:13.599 "sha256", 00:21:13.599 "sha384", 00:21:13.599 "sha512" 00:21:13.599 ], 00:21:13.599 "dhchap_dhgroups": [ 00:21:13.599 "null", 00:21:13.599 "ffdhe2048", 00:21:13.599 "ffdhe3072", 00:21:13.599 "ffdhe4096", 00:21:13.599 "ffdhe6144", 00:21:13.599 "ffdhe8192" 00:21:13.599 ] 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_set_max_subsystems", 00:21:13.599 "params": { 00:21:13.599 "max_subsystems": 1024 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_set_crdt", 00:21:13.599 "params": { 00:21:13.599 "crdt1": 0, 00:21:13.599 "crdt2": 0, 00:21:13.599 "crdt3": 0 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_create_transport", 00:21:13.599 "params": 
{ 00:21:13.599 "trtype": "TCP", 00:21:13.599 "max_queue_depth": 128, 00:21:13.599 "max_io_qpairs_per_ctrlr": 127, 00:21:13.599 "in_capsule_data_size": 4096, 00:21:13.599 "max_io_size": 131072, 00:21:13.599 "io_unit_size": 131072, 00:21:13.599 "max_aq_depth": 128, 00:21:13.599 "num_shared_buffers": 511, 00:21:13.599 "buf_cache_size": 4294967295, 00:21:13.599 "dif_insert_or_strip": false, 00:21:13.599 "zcopy": false, 00:21:13.599 "c2h_success": false, 00:21:13.599 "sock_priority": 0, 00:21:13.599 "abort_timeout_sec": 1, 00:21:13.599 "ack_timeout": 0, 00:21:13.599 "data_wr_pool_size": 0 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_create_subsystem", 00:21:13.599 "params": { 00:21:13.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.599 "allow_any_host": false, 00:21:13.599 "serial_number": "00000000000000000000", 00:21:13.599 "model_number": "SPDK bdev Controller", 00:21:13.599 "max_namespaces": 32, 00:21:13.599 "min_cntlid": 1, 00:21:13.599 "max_cntlid": 65519, 00:21:13.599 "ana_reporting": false 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_subsystem_add_host", 00:21:13.599 "params": { 00:21:13.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.599 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.599 "psk": "key0" 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_subsystem_add_ns", 00:21:13.599 "params": { 00:21:13.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.599 "namespace": { 00:21:13.599 "nsid": 1, 00:21:13.599 "bdev_name": "malloc0", 00:21:13.599 "nguid": "55A32D31A95F4B0A999F80EB4E289213", 00:21:13.599 "uuid": "55a32d31-a95f-4b0a-999f-80eb4e289213", 00:21:13.599 "no_auto_visible": false 00:21:13.599 } 00:21:13.599 } 00:21:13.599 }, 00:21:13.599 { 00:21:13.599 "method": "nvmf_subsystem_add_listener", 00:21:13.599 "params": { 00:21:13.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.599 "listen_address": { 00:21:13.599 "trtype": "TCP", 00:21:13.599 "adrfam": "IPv4", 00:21:13.599 "traddr": "10.0.0.2", 00:21:13.599 "trsvcid": "4420" 00:21:13.599 }, 00:21:13.599 "secure_channel": false, 00:21:13.599 "sock_impl": "ssl" 00:21:13.599 } 00:21:13.599 } 00:21:13.599 ] 00:21:13.599 } 00:21:13.599 ] 00:21:13.599 }' 00:21:13.599 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:13.859 "subsystems": [ 00:21:13.859 { 00:21:13.859 "subsystem": "keyring", 00:21:13.859 "config": [ 00:21:13.859 { 00:21:13.859 "method": "keyring_file_add_key", 00:21:13.859 "params": { 00:21:13.859 "name": "key0", 00:21:13.859 "path": "/tmp/tmp.JG4sKhEbG2" 00:21:13.859 } 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "iobuf", 00:21:13.859 "config": [ 00:21:13.859 { 00:21:13.859 "method": "iobuf_set_options", 00:21:13.859 "params": { 00:21:13.859 "small_pool_count": 8192, 00:21:13.859 "large_pool_count": 1024, 00:21:13.859 "small_bufsize": 8192, 00:21:13.859 "large_bufsize": 135168, 00:21:13.859 "enable_numa": false 00:21:13.859 } 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "sock", 00:21:13.859 "config": [ 00:21:13.859 { 00:21:13.859 "method": "sock_set_default_impl", 00:21:13.859 "params": { 00:21:13.859 "impl_name": "posix" 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "sock_impl_set_options", 00:21:13.859 
"params": { 00:21:13.859 "impl_name": "ssl", 00:21:13.859 "recv_buf_size": 4096, 00:21:13.859 "send_buf_size": 4096, 00:21:13.859 "enable_recv_pipe": true, 00:21:13.859 "enable_quickack": false, 00:21:13.859 "enable_placement_id": 0, 00:21:13.859 "enable_zerocopy_send_server": true, 00:21:13.859 "enable_zerocopy_send_client": false, 00:21:13.859 "zerocopy_threshold": 0, 00:21:13.859 "tls_version": 0, 00:21:13.859 "enable_ktls": false 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "sock_impl_set_options", 00:21:13.859 "params": { 00:21:13.859 "impl_name": "posix", 00:21:13.859 "recv_buf_size": 2097152, 00:21:13.859 "send_buf_size": 2097152, 00:21:13.859 "enable_recv_pipe": true, 00:21:13.859 "enable_quickack": false, 00:21:13.859 "enable_placement_id": 0, 00:21:13.859 "enable_zerocopy_send_server": true, 00:21:13.859 "enable_zerocopy_send_client": false, 00:21:13.859 "zerocopy_threshold": 0, 00:21:13.859 "tls_version": 0, 00:21:13.859 "enable_ktls": false 00:21:13.859 } 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "vmd", 00:21:13.859 "config": [] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "accel", 00:21:13.859 "config": [ 00:21:13.859 { 00:21:13.859 "method": "accel_set_options", 00:21:13.859 "params": { 00:21:13.859 "small_cache_size": 128, 00:21:13.859 "large_cache_size": 16, 00:21:13.859 "task_count": 2048, 00:21:13.859 "sequence_count": 2048, 00:21:13.859 "buf_count": 2048 00:21:13.859 } 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "bdev", 00:21:13.859 "config": [ 00:21:13.859 { 00:21:13.859 "method": "bdev_set_options", 00:21:13.859 "params": { 00:21:13.859 "bdev_io_pool_size": 65535, 00:21:13.859 "bdev_io_cache_size": 256, 00:21:13.859 "bdev_auto_examine": true, 00:21:13.859 "iobuf_small_cache_size": 128, 00:21:13.859 "iobuf_large_cache_size": 16 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_raid_set_options", 00:21:13.859 "params": { 00:21:13.859 "process_window_size_kb": 1024, 00:21:13.859 "process_max_bandwidth_mb_sec": 0 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_iscsi_set_options", 00:21:13.859 "params": { 00:21:13.859 "timeout_sec": 30 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_nvme_set_options", 00:21:13.859 "params": { 00:21:13.859 "action_on_timeout": "none", 00:21:13.859 "timeout_us": 0, 00:21:13.859 "timeout_admin_us": 0, 00:21:13.859 "keep_alive_timeout_ms": 10000, 00:21:13.859 "arbitration_burst": 0, 00:21:13.859 "low_priority_weight": 0, 00:21:13.859 "medium_priority_weight": 0, 00:21:13.859 "high_priority_weight": 0, 00:21:13.859 "nvme_adminq_poll_period_us": 10000, 00:21:13.859 "nvme_ioq_poll_period_us": 0, 00:21:13.859 "io_queue_requests": 512, 00:21:13.859 "delay_cmd_submit": true, 00:21:13.859 "transport_retry_count": 4, 00:21:13.859 "bdev_retry_count": 3, 00:21:13.859 "transport_ack_timeout": 0, 00:21:13.859 "ctrlr_loss_timeout_sec": 0, 00:21:13.859 "reconnect_delay_sec": 0, 00:21:13.859 "fast_io_fail_timeout_sec": 0, 00:21:13.859 "disable_auto_failback": false, 00:21:13.859 "generate_uuids": false, 00:21:13.859 "transport_tos": 0, 00:21:13.859 "nvme_error_stat": false, 00:21:13.859 "rdma_srq_size": 0, 00:21:13.859 "io_path_stat": false, 00:21:13.859 "allow_accel_sequence": false, 00:21:13.859 "rdma_max_cq_size": 0, 00:21:13.859 "rdma_cm_event_timeout_ms": 0, 00:21:13.859 "dhchap_digests": [ 00:21:13.859 "sha256", 00:21:13.859 "sha384", 00:21:13.859 
"sha512" 00:21:13.859 ], 00:21:13.859 "dhchap_dhgroups": [ 00:21:13.859 "null", 00:21:13.859 "ffdhe2048", 00:21:13.859 "ffdhe3072", 00:21:13.859 "ffdhe4096", 00:21:13.859 "ffdhe6144", 00:21:13.859 "ffdhe8192" 00:21:13.859 ] 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_nvme_attach_controller", 00:21:13.859 "params": { 00:21:13.859 "name": "nvme0", 00:21:13.859 "trtype": "TCP", 00:21:13.859 "adrfam": "IPv4", 00:21:13.859 "traddr": "10.0.0.2", 00:21:13.859 "trsvcid": "4420", 00:21:13.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.859 "prchk_reftag": false, 00:21:13.859 "prchk_guard": false, 00:21:13.859 "ctrlr_loss_timeout_sec": 0, 00:21:13.859 "reconnect_delay_sec": 0, 00:21:13.859 "fast_io_fail_timeout_sec": 0, 00:21:13.859 "psk": "key0", 00:21:13.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.859 "hdgst": false, 00:21:13.859 "ddgst": false, 00:21:13.859 "multipath": "multipath" 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_nvme_set_hotplug", 00:21:13.859 "params": { 00:21:13.859 "period_us": 100000, 00:21:13.859 "enable": false 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_enable_histogram", 00:21:13.859 "params": { 00:21:13.859 "name": "nvme0n1", 00:21:13.859 "enable": true 00:21:13.859 } 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "method": "bdev_wait_for_examine" 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }, 00:21:13.859 { 00:21:13.859 "subsystem": "nbd", 00:21:13.859 "config": [] 00:21:13.859 } 00:21:13.859 ] 00:21:13.859 }' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 352578 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352578 ']' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352578 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352578 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352578' 00:21:13.859 killing process with pid 352578 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352578 00:21:13.859 Received shutdown signal, test time was about 1.000000 seconds 00:21:13.859 00:21:13.859 Latency(us) 00:21:13.859 [2024-12-09T05:20:08.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.859 [2024-12-09T05:20:08.446Z] =================================================================================================================== 00:21:13.859 [2024-12-09T05:20:08.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352578 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 352337 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352337 ']' 
00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352337 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352337 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352337' 00:21:13.859 killing process with pid 352337 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352337 00:21:13.859 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352337 00:21:14.120 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:14.120 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.120 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.120 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:14.120 "subsystems": [ 00:21:14.120 { 00:21:14.120 "subsystem": "keyring", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "keyring_file_add_key", 00:21:14.120 "params": { 00:21:14.120 "name": "key0", 00:21:14.120 "path": "/tmp/tmp.JG4sKhEbG2" 00:21:14.120 } 00:21:14.120 } 00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "iobuf", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "iobuf_set_options", 00:21:14.120 "params": { 00:21:14.120 "small_pool_count": 8192, 00:21:14.120 "large_pool_count": 1024, 00:21:14.120 "small_bufsize": 8192, 00:21:14.120 "large_bufsize": 135168, 00:21:14.120 "enable_numa": false 00:21:14.120 } 00:21:14.120 } 00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "sock", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "sock_set_default_impl", 00:21:14.120 "params": { 00:21:14.120 "impl_name": "posix" 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "sock_impl_set_options", 00:21:14.120 "params": { 00:21:14.120 "impl_name": "ssl", 00:21:14.120 "recv_buf_size": 4096, 00:21:14.120 "send_buf_size": 4096, 00:21:14.120 "enable_recv_pipe": true, 00:21:14.120 "enable_quickack": false, 00:21:14.120 "enable_placement_id": 0, 00:21:14.120 "enable_zerocopy_send_server": true, 00:21:14.120 "enable_zerocopy_send_client": false, 00:21:14.120 "zerocopy_threshold": 0, 00:21:14.120 "tls_version": 0, 00:21:14.120 "enable_ktls": false 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "sock_impl_set_options", 00:21:14.120 "params": { 00:21:14.120 "impl_name": "posix", 00:21:14.120 "recv_buf_size": 2097152, 00:21:14.120 "send_buf_size": 2097152, 00:21:14.120 "enable_recv_pipe": true, 00:21:14.120 "enable_quickack": false, 00:21:14.120 "enable_placement_id": 0, 00:21:14.120 "enable_zerocopy_send_server": true, 00:21:14.120 "enable_zerocopy_send_client": false, 00:21:14.120 "zerocopy_threshold": 0, 00:21:14.120 "tls_version": 0, 00:21:14.120 "enable_ktls": false 
00:21:14.120 } 00:21:14.120 } 00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "vmd", 00:21:14.120 "config": [] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "accel", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "accel_set_options", 00:21:14.120 "params": { 00:21:14.120 "small_cache_size": 128, 00:21:14.120 "large_cache_size": 16, 00:21:14.120 "task_count": 2048, 00:21:14.120 "sequence_count": 2048, 00:21:14.120 "buf_count": 2048 00:21:14.120 } 00:21:14.120 } 00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "bdev", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "bdev_set_options", 00:21:14.120 "params": { 00:21:14.120 "bdev_io_pool_size": 65535, 00:21:14.120 "bdev_io_cache_size": 256, 00:21:14.120 "bdev_auto_examine": true, 00:21:14.120 "iobuf_small_cache_size": 128, 00:21:14.120 "iobuf_large_cache_size": 16 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_raid_set_options", 00:21:14.120 "params": { 00:21:14.120 "process_window_size_kb": 1024, 00:21:14.120 "process_max_bandwidth_mb_sec": 0 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_iscsi_set_options", 00:21:14.120 "params": { 00:21:14.120 "timeout_sec": 30 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_nvme_set_options", 00:21:14.120 "params": { 00:21:14.120 "action_on_timeout": "none", 00:21:14.120 "timeout_us": 0, 00:21:14.120 "timeout_admin_us": 0, 00:21:14.120 "keep_alive_timeout_ms": 10000, 00:21:14.120 "arbitration_burst": 0, 00:21:14.120 "low_priority_weight": 0, 00:21:14.120 "medium_priority_weight": 0, 00:21:14.120 "high_priority_weight": 0, 00:21:14.120 "nvme_adminq_poll_period_us": 10000, 00:21:14.120 "nvme_ioq_poll_period_us": 0, 00:21:14.120 "io_queue_requests": 0, 00:21:14.120 "delay_cmd_submit": true, 00:21:14.120 "transport_retry_count": 4, 00:21:14.120 "bdev_retry_count": 3, 00:21:14.120 "transport_ack_timeout": 0, 00:21:14.120 "ctrlr_loss_timeout_sec": 0, 00:21:14.120 "reconnect_delay_sec": 0, 00:21:14.120 "fast_io_fail_timeout_sec": 0, 00:21:14.120 "disable_auto_failback": false, 00:21:14.120 "generate_uuids": false, 00:21:14.120 "transport_tos": 0, 00:21:14.120 "nvme_error_stat": false, 00:21:14.120 "rdma_srq_size": 0, 00:21:14.120 "io_path_stat": false, 00:21:14.120 "allow_accel_sequence": false, 00:21:14.120 "rdma_max_cq_size": 0, 00:21:14.120 "rdma_cm_event_timeout_ms": 0, 00:21:14.120 "dhchap_digests": [ 00:21:14.120 "sha256", 00:21:14.120 "sha384", 00:21:14.120 "sha512" 00:21:14.120 ], 00:21:14.120 "dhchap_dhgroups": [ 00:21:14.120 "null", 00:21:14.120 "ffdhe2048", 00:21:14.120 "ffdhe3072", 00:21:14.120 "ffdhe4096", 00:21:14.120 "ffdhe6144", 00:21:14.120 "ffdhe8192" 00:21:14.120 ] 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_nvme_set_hotplug", 00:21:14.120 "params": { 00:21:14.120 "period_us": 100000, 00:21:14.120 "enable": false 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_malloc_create", 00:21:14.120 "params": { 00:21:14.120 "name": "malloc0", 00:21:14.120 "num_blocks": 8192, 00:21:14.120 "block_size": 4096, 00:21:14.120 "physical_block_size": 4096, 00:21:14.120 "uuid": "55a32d31-a95f-4b0a-999f-80eb4e289213", 00:21:14.120 "optimal_io_boundary": 0, 00:21:14.120 "md_size": 0, 00:21:14.120 "dif_type": 0, 00:21:14.120 "dif_is_head_of_md": false, 00:21:14.120 "dif_pi_format": 0 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "bdev_wait_for_examine" 00:21:14.120 } 
00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "nbd", 00:21:14.120 "config": [] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "scheduler", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "framework_set_scheduler", 00:21:14.120 "params": { 00:21:14.120 "name": "static" 00:21:14.120 } 00:21:14.120 } 00:21:14.120 ] 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "subsystem": "nvmf", 00:21:14.120 "config": [ 00:21:14.120 { 00:21:14.120 "method": "nvmf_set_config", 00:21:14.120 "params": { 00:21:14.120 "discovery_filter": "match_any", 00:21:14.120 "admin_cmd_passthru": { 00:21:14.120 "identify_ctrlr": false 00:21:14.120 }, 00:21:14.120 "dhchap_digests": [ 00:21:14.120 "sha256", 00:21:14.120 "sha384", 00:21:14.120 "sha512" 00:21:14.120 ], 00:21:14.120 "dhchap_dhgroups": [ 00:21:14.120 "null", 00:21:14.120 "ffdhe2048", 00:21:14.120 "ffdhe3072", 00:21:14.120 "ffdhe4096", 00:21:14.120 "ffdhe6144", 00:21:14.120 "ffdhe8192" 00:21:14.120 ] 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_set_max_subsystems", 00:21:14.120 "params": { 00:21:14.120 "max_subsystems": 1024 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_set_crdt", 00:21:14.120 "params": { 00:21:14.120 "crdt1": 0, 00:21:14.120 "crdt2": 0, 00:21:14.120 "crdt3": 0 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_create_transport", 00:21:14.120 "params": { 00:21:14.120 "trtype": "TCP", 00:21:14.120 "max_queue_depth": 128, 00:21:14.120 "max_io_qpairs_per_ctrlr": 127, 00:21:14.120 "in_capsule_data_size": 4096, 00:21:14.120 "max_io_size": 131072, 00:21:14.120 "io_unit_size": 131072, 00:21:14.120 "max_aq_depth": 128, 00:21:14.120 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.120 "num_shared_buffers": 511, 00:21:14.120 "buf_cache_size": 4294967295, 00:21:14.120 "dif_insert_or_strip": false, 00:21:14.120 "zcopy": false, 00:21:14.120 "c2h_success": false, 00:21:14.120 "sock_priority": 0, 00:21:14.120 "abort_timeout_sec": 1, 00:21:14.120 "ack_timeout": 0, 00:21:14.120 "data_wr_pool_size": 0 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_create_subsystem", 00:21:14.120 "params": { 00:21:14.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.120 "allow_any_host": false, 00:21:14.120 "serial_number": "00000000000000000000", 00:21:14.120 "model_number": "SPDK bdev Controller", 00:21:14.120 "max_namespaces": 32, 00:21:14.120 "min_cntlid": 1, 00:21:14.120 "max_cntlid": 65519, 00:21:14.120 "ana_reporting": false 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_subsystem_add_host", 00:21:14.120 "params": { 00:21:14.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.120 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.120 "psk": "key0" 00:21:14.120 } 00:21:14.120 }, 00:21:14.120 { 00:21:14.120 "method": "nvmf_subsystem_add_ns", 00:21:14.120 "params": { 00:21:14.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.120 "namespace": { 00:21:14.120 "nsid": 1, 00:21:14.120 "bdev_name": "malloc0", 00:21:14.120 "nguid": "55A32D31A95F4B0A999F80EB4E289213", 00:21:14.120 "uuid": "55a32d31-a95f-4b0a-999f-80eb4e289213", 00:21:14.121 "no_auto_visible": false 00:21:14.121 } 00:21:14.121 } 00:21:14.121 }, 00:21:14.121 { 00:21:14.121 "method": "nvmf_subsystem_add_listener", 00:21:14.121 "params": { 00:21:14.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.121 "listen_address": { 00:21:14.121 "trtype": "TCP", 00:21:14.121 "adrfam": "IPv4", 00:21:14.121 
"traddr": "10.0.0.2", 00:21:14.121 "trsvcid": "4420" 00:21:14.121 }, 00:21:14.121 "secure_channel": false, 00:21:14.121 "sock_impl": "ssl" 00:21:14.121 } 00:21:14.121 } 00:21:14.121 ] 00:21:14.121 } 00:21:14.121 ] 00:21:14.121 }' 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=352985 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 352985 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352985 ']' 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.121 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:14.121 [2024-12-09 06:20:08.586317] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:14.121 [2024-12-09 06:20:08.586368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.121 [2024-12-09 06:20:08.673050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.121 [2024-12-09 06:20:08.702152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.121 [2024-12-09 06:20:08.702183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.121 [2024-12-09 06:20:08.702189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.121 [2024-12-09 06:20:08.702194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.121 [2024-12-09 06:20:08.702198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.121 [2024-12-09 06:20:08.702665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.381 [2024-12-09 06:20:08.895647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.381 [2024-12-09 06:20:08.927670] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.381 [2024-12-09 06:20:08.927847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=353072 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 353072 /var/tmp/bdevperf.sock 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353072 ']' 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
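[Editor's note — both restarts above replay the JSON captured earlier with save_config instead of re-issuing RPCs one by one; tgtcfg and bperfcfg are tls.sh's own variable names from the trace. A minimal sketch of the pattern, assuming plain rpc.py calls (the harness wraps these as rpc_cmd and runs the target inside the cvl_0_0_ns_spdk netns, which is elided here); bash process substitution is what produces the /dev/fd/62 and /dev/fd/63 arguments seen in the log:

  # capture the running state of both processes, then replay it into fresh ones
  tgtcfg=$(rpc.py save_config)                              # target side, default /var/tmp/spdk.sock
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)  # bdevperf side
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")

End of note; the log resumes with the bdevperf startup trace.]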
00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:14.951 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:14.951 "subsystems": [ 00:21:14.951 { 00:21:14.951 "subsystem": "keyring", 00:21:14.951 "config": [ 00:21:14.951 { 00:21:14.951 "method": "keyring_file_add_key", 00:21:14.951 "params": { 00:21:14.951 "name": "key0", 00:21:14.951 "path": "/tmp/tmp.JG4sKhEbG2" 00:21:14.951 } 00:21:14.951 } 00:21:14.951 ] 00:21:14.951 }, 00:21:14.951 { 00:21:14.951 "subsystem": "iobuf", 00:21:14.951 "config": [ 00:21:14.951 { 00:21:14.951 "method": "iobuf_set_options", 00:21:14.951 "params": { 00:21:14.951 "small_pool_count": 8192, 00:21:14.951 "large_pool_count": 1024, 00:21:14.951 "small_bufsize": 8192, 00:21:14.951 "large_bufsize": 135168, 00:21:14.951 "enable_numa": false 00:21:14.951 } 00:21:14.951 } 00:21:14.951 ] 00:21:14.951 }, 00:21:14.951 { 00:21:14.951 "subsystem": "sock", 00:21:14.951 "config": [ 00:21:14.951 { 00:21:14.951 "method": "sock_set_default_impl", 00:21:14.951 "params": { 00:21:14.951 "impl_name": "posix" 00:21:14.951 } 00:21:14.951 }, 00:21:14.951 { 00:21:14.951 "method": "sock_impl_set_options", 00:21:14.951 "params": { 00:21:14.951 "impl_name": "ssl", 00:21:14.951 "recv_buf_size": 4096, 00:21:14.951 "send_buf_size": 4096, 00:21:14.951 "enable_recv_pipe": true, 00:21:14.951 "enable_quickack": false, 00:21:14.951 "enable_placement_id": 0, 00:21:14.951 "enable_zerocopy_send_server": true, 00:21:14.951 "enable_zerocopy_send_client": false, 00:21:14.951 "zerocopy_threshold": 0, 00:21:14.951 "tls_version": 0, 00:21:14.951 "enable_ktls": false 00:21:14.951 } 00:21:14.951 }, 00:21:14.951 { 00:21:14.951 "method": "sock_impl_set_options", 00:21:14.951 "params": { 00:21:14.951 "impl_name": "posix", 00:21:14.951 "recv_buf_size": 2097152, 00:21:14.951 "send_buf_size": 2097152, 00:21:14.951 "enable_recv_pipe": true, 00:21:14.951 "enable_quickack": false, 00:21:14.951 "enable_placement_id": 0, 00:21:14.951 "enable_zerocopy_send_server": true, 00:21:14.951 "enable_zerocopy_send_client": false, 00:21:14.951 "zerocopy_threshold": 0, 00:21:14.951 "tls_version": 0, 00:21:14.951 "enable_ktls": false 00:21:14.951 } 00:21:14.951 } 00:21:14.952 ] 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "subsystem": "vmd", 00:21:14.952 "config": [] 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "subsystem": "accel", 00:21:14.952 "config": [ 00:21:14.952 { 00:21:14.952 "method": "accel_set_options", 00:21:14.952 "params": { 00:21:14.952 "small_cache_size": 128, 00:21:14.952 "large_cache_size": 16, 00:21:14.952 "task_count": 2048, 00:21:14.952 "sequence_count": 2048, 00:21:14.952 "buf_count": 2048 00:21:14.952 } 00:21:14.952 } 00:21:14.952 ] 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "subsystem": "bdev", 00:21:14.952 "config": [ 00:21:14.952 { 00:21:14.952 "method": "bdev_set_options", 00:21:14.952 "params": { 00:21:14.952 "bdev_io_pool_size": 65535, 00:21:14.952 "bdev_io_cache_size": 256, 00:21:14.952 "bdev_auto_examine": true, 00:21:14.952 "iobuf_small_cache_size": 128, 00:21:14.952 "iobuf_large_cache_size": 16 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": 
"bdev_raid_set_options", 00:21:14.952 "params": { 00:21:14.952 "process_window_size_kb": 1024, 00:21:14.952 "process_max_bandwidth_mb_sec": 0 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_iscsi_set_options", 00:21:14.952 "params": { 00:21:14.952 "timeout_sec": 30 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_nvme_set_options", 00:21:14.952 "params": { 00:21:14.952 "action_on_timeout": "none", 00:21:14.952 "timeout_us": 0, 00:21:14.952 "timeout_admin_us": 0, 00:21:14.952 "keep_alive_timeout_ms": 10000, 00:21:14.952 "arbitration_burst": 0, 00:21:14.952 "low_priority_weight": 0, 00:21:14.952 "medium_priority_weight": 0, 00:21:14.952 "high_priority_weight": 0, 00:21:14.952 "nvme_adminq_poll_period_us": 10000, 00:21:14.952 "nvme_ioq_poll_period_us": 0, 00:21:14.952 "io_queue_requests": 512, 00:21:14.952 "delay_cmd_submit": true, 00:21:14.952 "transport_retry_count": 4, 00:21:14.952 "bdev_retry_count": 3, 00:21:14.952 "transport_ack_timeout": 0, 00:21:14.952 "ctrlr_loss_timeout_sec": 0, 00:21:14.952 "reconnect_delay_sec": 0, 00:21:14.952 "fast_io_fail_timeout_sec": 0, 00:21:14.952 "disable_auto_failback": false, 00:21:14.952 "generate_uuids": false, 00:21:14.952 "transport_tos": 0, 00:21:14.952 "nvme_error_stat": false, 00:21:14.952 "rdma_srq_size": 0, 00:21:14.952 "io_path_stat": false, 00:21:14.952 "allow_accel_sequence": false, 00:21:14.952 "rdma_max_cq_size": 0, 00:21:14.952 "rdma_cm_event_timeout_ms": 0, 00:21:14.952 "dhchap_digests": [ 00:21:14.952 "sha256", 00:21:14.952 "sha384", 00:21:14.952 "sha512" 00:21:14.952 ], 00:21:14.952 "dhchap_dhgroups": [ 00:21:14.952 "null", 00:21:14.952 "ffdhe2048", 00:21:14.952 "ffdhe3072", 00:21:14.952 "ffdhe4096", 00:21:14.952 "ffdhe6144", 00:21:14.952 "ffdhe8192" 00:21:14.952 ] 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_nvme_attach_controller", 00:21:14.952 "params": { 00:21:14.952 "name": "nvme0", 00:21:14.952 "trtype": "TCP", 00:21:14.952 "adrfam": "IPv4", 00:21:14.952 "traddr": "10.0.0.2", 00:21:14.952 "trsvcid": "4420", 00:21:14.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.952 "prchk_reftag": false, 00:21:14.952 "prchk_guard": false, 00:21:14.952 "ctrlr_loss_timeout_sec": 0, 00:21:14.952 "reconnect_delay_sec": 0, 00:21:14.952 "fast_io_fail_timeout_sec": 0, 00:21:14.952 "psk": "key0", 00:21:14.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.952 "hdgst": false, 00:21:14.952 "ddgst": false, 00:21:14.952 "multipath": "multipath" 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_nvme_set_hotplug", 00:21:14.952 "params": { 00:21:14.952 "period_us": 100000, 00:21:14.952 "enable": false 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_enable_histogram", 00:21:14.952 "params": { 00:21:14.952 "name": "nvme0n1", 00:21:14.952 "enable": true 00:21:14.952 } 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "method": "bdev_wait_for_examine" 00:21:14.952 } 00:21:14.952 ] 00:21:14.952 }, 00:21:14.952 { 00:21:14.952 "subsystem": "nbd", 00:21:14.952 "config": [] 00:21:14.952 } 00:21:14.952 ] 00:21:14.952 }' 00:21:14.952 [2024-12-09 06:20:09.470792] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:21:14.952 [2024-12-09 06:20:09.470840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353072 ] 00:21:14.952 [2024-12-09 06:20:09.528621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.212 [2024-12-09 06:20:09.559083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.212 [2024-12-09 06:20:09.694522] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.783 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.783 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.783 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.783 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:16.043 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.043 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.043 Running I/O for 1 seconds... 00:21:16.985 4106.00 IOPS, 16.04 MiB/s 00:21:16.985 Latency(us) 00:21:16.985 [2024-12-09T05:20:11.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.985 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:16.985 Verification LBA range: start 0x0 length 0x2000 00:21:16.985 nvme0n1 : 1.03 4117.04 16.08 0.00 0.00 30716.98 5268.09 175031.53 00:21:16.985 [2024-12-09T05:20:11.572Z] =================================================================================================================== 00:21:16.985 [2024-12-09T05:20:11.572Z] Total : 4117.04 16.08 0.00 0.00 30716.98 5268.09 175031.53 00:21:16.985 { 00:21:16.985 "results": [ 00:21:16.985 { 00:21:16.985 "job": "nvme0n1", 00:21:16.985 "core_mask": "0x2", 00:21:16.985 "workload": "verify", 00:21:16.985 "status": "finished", 00:21:16.985 "verify_range": { 00:21:16.985 "start": 0, 00:21:16.985 "length": 8192 00:21:16.985 }, 00:21:16.985 "queue_depth": 128, 00:21:16.985 "io_size": 4096, 00:21:16.985 "runtime": 1.028651, 00:21:16.985 "iops": 4117.042612120145, 00:21:16.985 "mibps": 16.082197703594318, 00:21:16.985 "io_failed": 0, 00:21:16.985 "io_timeout": 0, 00:21:16.985 "avg_latency_us": 30716.9779398783, 00:21:16.985 "min_latency_us": 5268.086153846154, 00:21:16.985 "max_latency_us": 175031.5323076923 00:21:16.985 } 00:21:16.985 ], 00:21:16.985 "core_count": 1 00:21:16.985 } 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:21:16.985 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.244 nvmf_trace.0 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 353072 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353072 ']' 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353072 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353072 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353072' 00:21:17.244 killing process with pid 353072 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353072 00:21:17.244 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.244 00:21:17.244 Latency(us) 00:21:17.244 [2024-12-09T05:20:11.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.244 [2024-12-09T05:20:11.831Z] =================================================================================================================== 00:21:17.244 [2024-12-09T05:20:11.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353072 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.244 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.504 rmmod nvme_tcp 00:21:17.504 rmmod nvme_fabrics 00:21:17.504 rmmod nvme_keyring 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.504 06:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 352985 ']' 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 352985 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352985 ']' 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352985 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352985 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352985' 00:21:17.504 killing process with pid 352985 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352985 00:21:17.504 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352985 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.504 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EOMzywe4Jv /tmp/tmp.6tEDwyT3Oj /tmp/tmp.JG4sKhEbG2 00:21:20.046 00:21:20.046 real 1m20.555s 00:21:20.046 user 2m7.983s 00:21:20.046 sys 0m23.580s 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.046 ************************************ 00:21:20.046 END TEST nvmf_tls 00:21:20.046 
************************************ 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.046 ************************************ 00:21:20.046 START TEST nvmf_fips 00:21:20.046 ************************************ 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.046 * Looking for test storage... 00:21:20.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:20.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.046 --rc genhtml_branch_coverage=1 00:21:20.046 --rc genhtml_function_coverage=1 00:21:20.046 --rc genhtml_legend=1 00:21:20.046 --rc geninfo_all_blocks=1 00:21:20.046 --rc geninfo_unexecuted_blocks=1 00:21:20.046 00:21:20.046 ' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:20.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.046 --rc genhtml_branch_coverage=1 00:21:20.046 --rc genhtml_function_coverage=1 00:21:20.046 --rc genhtml_legend=1 00:21:20.046 --rc geninfo_all_blocks=1 00:21:20.046 --rc geninfo_unexecuted_blocks=1 00:21:20.046 00:21:20.046 ' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:20.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.046 --rc genhtml_branch_coverage=1 00:21:20.046 --rc genhtml_function_coverage=1 00:21:20.046 --rc genhtml_legend=1 00:21:20.046 --rc geninfo_all_blocks=1 00:21:20.046 --rc geninfo_unexecuted_blocks=1 00:21:20.046 00:21:20.046 ' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:20.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.046 --rc genhtml_branch_coverage=1 00:21:20.046 --rc genhtml_function_coverage=1 00:21:20.046 --rc genhtml_legend=1 00:21:20.046 --rc geninfo_all_blocks=1 00:21:20.046 --rc geninfo_unexecuted_blocks=1 00:21:20.046 00:21:20.046 ' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.046 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:20.047 06:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:20.047 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:20.307 Error setting digest 00:21:20.307 40A2BF80F87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:20.307 40A2BF80F87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:20.307 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:20.307 
06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.308 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.445 06:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:28.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:28.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:28.445 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.446 06:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:28.446 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:28.446 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.446 06:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.446 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:21:28.446 00:21:28.446 --- 10.0.0.2 ping statistics --- 00:21:28.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.446 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:21:28.446 00:21:28.446 --- 10.0.0.1 ping statistics --- 00:21:28.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.446 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=357568 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 357568 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 357568 ']' 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.446 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.446 [2024-12-09 06:20:22.230538] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
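The nvmf_tcp_init sequence traced above gives the target its own network namespace so the two physical e810 ports can reach each other on a single host: one port (cvl_0_0, 10.0.0.2) is moved into the namespace for the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and both directions are ping-checked before the target is launched. Condensed into plain commands, this is a minimal sketch of that setup; the interface names, addresses and namespace name are the ones from this run, and steps the trace does not show are omitted:

    # move the target-facing port out of the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator address in the root namespace, target address inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, tagged so cleanup can find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
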
00:21:28.446 [2024-12-09 06:20:22.230605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.446 [2024-12-09 06:20:22.308119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.446 [2024-12-09 06:20:22.357103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.446 [2024-12-09 06:20:22.357152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.446 [2024-12-09 06:20:22.357160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.446 [2024-12-09 06:20:22.357167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.446 [2024-12-09 06:20:22.357172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.446 [2024-12-09 06:20:22.357902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.a1m 00:21:28.706 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.707 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.a1m 00:21:28.707 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.a1m 00:21:28.707 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.a1m 00:21:28.707 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.707 [2024-12-09 06:20:23.258933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.707 [2024-12-09 06:20:23.274924] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.707 [2024-12-09 06:20:23.275201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.966 malloc0 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.966 06:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=357888 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 357888 /var/tmp/bdevperf.sock 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 357888 ']' 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.966 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.966 [2024-12-09 06:20:23.424847] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:28.966 [2024-12-09 06:20:23.424923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357888 ] 00:21:28.966 [2024-12-09 06:20:23.497555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.966 [2024-12-09 06:20:23.546651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.936 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.936 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:29.936 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.a1m 00:21:29.936 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.196 [2024-12-09 06:20:24.552018] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.196 TLSTESTn1 00:21:30.196 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.196 Running I/O for 10 seconds... 
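At this point everything the FIPS TLS data-path check needs is in place: the trace above wrote an interchange-format pre-shared key to a mode-0600 file, registered that file with the bdevperf application's keyring, and attached an NVMe/TCP controller using the key before launching the ten-second verify workload. As a condensed sketch of the same flow (rpc.py and bdevperf.py stand for the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths used in the trace; the socket path, NQNs, key file name and PSK value are the ones from this run):

    # TLS PSK in NVMe TLS interchange format, readable only by the owner
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.a1m
    chmod 0600 /tmp/spdk-psk.a1m

    # register the key with the running bdevperf instance, then attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.a1m
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

    # drive the configured workload (-q 128, 4096-byte I/O, verify, 10 s)
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
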
00:21:32.514 1376.00 IOPS, 5.38 MiB/s
[2024-12-09T05:20:28.041Z] 2092.50 IOPS, 8.17 MiB/s
[2024-12-09T05:20:28.980Z] 2095.67 IOPS, 8.19 MiB/s
[2024-12-09T05:20:29.919Z] 2030.75 IOPS, 7.93 MiB/s
[2024-12-09T05:20:30.860Z] 1988.40 IOPS, 7.77 MiB/s
[2024-12-09T05:20:31.797Z] 2135.17 IOPS, 8.34 MiB/s
[2024-12-09T05:20:33.179Z] 2038.71 IOPS, 7.96 MiB/s
[2024-12-09T05:20:34.119Z] 1958.88 IOPS, 7.65 MiB/s
[2024-12-09T05:20:35.059Z] 1935.89 IOPS, 7.56 MiB/s
[2024-12-09T05:20:35.059Z] 2017.70 IOPS, 7.88 MiB/s
00:21:40.472 Latency(us)
00:21:40.472 [2024-12-09T05:20:35.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.472 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:40.473 Verification LBA range: start 0x0 length 0x2000
00:21:40.473 TLSTESTn1 : 10.05 2019.99 7.89 0.00 0.00 63246.15 4990.82 141154.46
00:21:40.473 [2024-12-09T05:20:35.060Z] ===================================================================================================================
00:21:40.473 [2024-12-09T05:20:35.060Z] Total : 2019.99 7.89 0.00 0.00 63246.15 4990.82 141154.46
00:21:40.473 {
00:21:40.473 "results": [
00:21:40.473 {
00:21:40.473 "job": "TLSTESTn1",
00:21:40.473 "core_mask": "0x4",
00:21:40.473 "workload": "verify",
00:21:40.473 "status": "finished",
00:21:40.473 "verify_range": {
00:21:40.473 "start": 0,
00:21:40.473 "length": 8192
00:21:40.473 },
00:21:40.473 "queue_depth": 128,
00:21:40.473 "io_size": 4096,
00:21:40.473 "runtime": 10.052021,
00:21:40.473 "iops": 2019.9918006538187,
00:21:40.473 "mibps": 7.890592971303979,
00:21:40.473 "io_failed": 0,
00:21:40.473 "io_timeout": 0,
00:21:40.473 "avg_latency_us": 63246.14847453261,
00:21:40.473 "min_latency_us": 4990.818461538462,
00:21:40.473 "max_latency_us": 141154.46153846153
00:21:40.473 }
00:21:40.473 ],
00:21:40.473 "core_count": 1
00:21:40.473 }
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:40.473 nvmf_trace.0
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 357888
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 357888 ']'
00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958
-- # kill -0 357888 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357888 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357888' 00:21:40.473 killing process with pid 357888 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 357888 00:21:40.473 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.473 00:21:40.473 Latency(us) 00:21:40.473 [2024-12-09T05:20:35.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.473 [2024-12-09T05:20:35.060Z] =================================================================================================================== 00:21:40.473 [2024-12-09T05:20:35.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.473 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 357888 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.745 rmmod nvme_tcp 00:21:40.745 rmmod nvme_fabrics 00:21:40.745 rmmod nvme_keyring 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 357568 ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 357568 ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357568' 00:21:40.745 killing process with pid 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 357568 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.745 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.a1m 00:21:43.291 00:21:43.291 real 0m23.175s 00:21:43.291 user 0m25.885s 00:21:43.291 sys 0m8.578s 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.291 ************************************ 00:21:43.291 END TEST nvmf_fips 00:21:43.291 ************************************ 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.291 ************************************ 00:21:43.291 START TEST nvmf_control_msg_list 00:21:43.291 ************************************ 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:43.291 * Looking for test storage... 
00:21:43.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.291 --rc genhtml_branch_coverage=1 00:21:43.291 --rc genhtml_function_coverage=1 00:21:43.291 --rc genhtml_legend=1 00:21:43.291 --rc geninfo_all_blocks=1 00:21:43.291 --rc geninfo_unexecuted_blocks=1 00:21:43.291 00:21:43.291 ' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.291 --rc genhtml_branch_coverage=1 00:21:43.291 --rc genhtml_function_coverage=1 00:21:43.291 --rc genhtml_legend=1 00:21:43.291 --rc geninfo_all_blocks=1 00:21:43.291 --rc geninfo_unexecuted_blocks=1 00:21:43.291 00:21:43.291 ' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.291 --rc genhtml_branch_coverage=1 00:21:43.291 --rc genhtml_function_coverage=1 00:21:43.291 --rc genhtml_legend=1 00:21:43.291 --rc geninfo_all_blocks=1 00:21:43.291 --rc geninfo_unexecuted_blocks=1 00:21:43.291 00:21:43.291 ' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.291 --rc genhtml_branch_coverage=1 00:21:43.291 --rc genhtml_function_coverage=1 00:21:43.291 --rc genhtml_legend=1 00:21:43.291 --rc geninfo_all_blocks=1 00:21:43.291 --rc geninfo_unexecuted_blocks=1 00:21:43.291 00:21:43.291 ' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.291 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.292 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:51.426 06:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:51.426 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.426 06:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:51.426 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:51.426 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:51.426 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.426 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.427 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.427 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.427 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.427 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.427 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.427 06:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:21:51.427 00:21:51.427 --- 10.0.0.2 ping statistics --- 00:21:51.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.427 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:21:51.427 00:21:51.427 --- 10.0.0.1 ping statistics --- 00:21:51.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.427 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=363761 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 363761 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 363761 ']' 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.427 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.427 [2024-12-09 06:20:45.287031] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:21:51.427 [2024-12-09 06:20:45.287095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.427 [2024-12-09 06:20:45.384050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.427 [2024-12-09 06:20:45.433423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.427 [2024-12-09 06:20:45.433488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.427 [2024-12-09 06:20:45.433495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.427 [2024-12-09 06:20:45.433502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.427 [2024-12-09 06:20:45.433508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
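The nvmf_tcp_init and nvmfappstart steps traced above reduce to a short recipe: move the target-side port (cvl_0_0) into its own network namespace, address both sides on 10.0.0.0/24, open TCP port 4420, confirm reachability with a one-packet ping in each direction, then launch nvmf_tgt inside the namespace and poll its RPC socket until it answers. A minimal standalone sketch of that bring-up, assuming an SPDK checkout at $SPDK_DIR and the cvl_0_0/cvl_0_1 interface names reported by the trace:

#!/usr/bin/env bash
# Sketch of the data-plane setup performed by nvmf_tcp_init above.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root ns

# Control plane: start the target inside the namespace and wait for its RPC
# socket (default /var/tmp/spdk.sock), which is what nvmfappstart and
# waitforlisten amount to in the trace.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.5; done

The rpc_cmd calls traced below then drive this same socket: nvmf_create_transport -t tcp with --in-capsule-data-size 768 and --control-msg-num 1 (the knob this test exercises), subsystem nqn.2024-07.io.spdk:cnode0, a 32 MiB Malloc0 namespace with 512-byte blocks, and a TCP listener on 10.0.0.2:4420.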
00:21:51.427 [2024-12-09 06:20:45.434252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 [2024-12-09 06:20:46.155404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 Malloc0 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.688 06:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.688 [2024-12-09 06:20:46.193643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=363954 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=363955 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=363956 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 363954 00:21:51.688 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.688 [2024-12-09 06:20:46.272344] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:51.949 [2024-12-09 06:20:46.272786] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:51.949 [2024-12-09 06:20:46.282023] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:52.889 Initializing NVMe Controllers 00:21:52.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:52.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:52.890 Initialization complete. Launching workers. 
00:21:52.890 ======================================================== 00:21:52.890 Latency(us) 00:21:52.890 Device Information : IOPS MiB/s Average min max 00:21:52.890 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1853.99 7.24 539.37 197.49 870.26 00:21:52.890 ======================================================== 00:21:52.890 Total : 1853.99 7.24 539.37 197.49 870.26 00:21:52.890 00:21:52.890 Initializing NVMe Controllers 00:21:52.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:52.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:52.890 Initialization complete. Launching workers. 00:21:52.890 ======================================================== 00:21:52.890 Latency(us) 00:21:52.890 Device Information : IOPS MiB/s Average min max 00:21:52.890 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1872.00 7.31 534.12 131.83 902.86 00:21:52.890 ======================================================== 00:21:52.890 Total : 1872.00 7.31 534.12 131.83 902.86 00:21:52.890 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 363955 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 363956 00:21:53.150 Initializing NVMe Controllers 00:21:53.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:53.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:53.150 Initialization complete. Launching workers. 00:21:53.150 ======================================================== 00:21:53.150 Latency(us) 00:21:53.150 Device Information : IOPS MiB/s Average min max 00:21:53.150 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2124.00 8.30 470.76 160.91 703.70 00:21:53.150 ======================================================== 00:21:53.150 Total : 2124.00 8.30 470.76 160.91 703.70 00:21:53.150 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.150 rmmod nvme_tcp 00:21:53.150 rmmod nvme_fabrics 00:21:53.150 rmmod nvme_keyring 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 363761 
']' 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 363761 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 363761 ']' 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 363761 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363761 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363761' 00:21:53.150 killing process with pid 363761 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 363761 00:21:53.150 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 363761 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.410 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.320 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.320 00:21:55.320 real 0m12.423s 00:21:55.320 user 0m7.998s 00:21:55.320 sys 0m6.649s 00:21:55.320 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.320 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:55.320 ************************************ 00:21:55.320 END TEST nvmf_control_msg_list 00:21:55.320 ************************************ 00:21:55.581 06:20:49 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:55.581 06:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.581 06:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.581 06:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:55.581 ************************************ 00:21:55.581 START TEST nvmf_wait_for_buf 00:21:55.581 ************************************ 00:21:55.581 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:55.581 * Looking for test storage... 00:21:55.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:55.581 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:55.581 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:55.581 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:55.581 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.842 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.843 --rc genhtml_branch_coverage=1 00:21:55.843 --rc genhtml_function_coverage=1 00:21:55.843 --rc genhtml_legend=1 00:21:55.843 --rc geninfo_all_blocks=1 00:21:55.843 --rc geninfo_unexecuted_blocks=1 00:21:55.843 00:21:55.843 ' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.843 --rc genhtml_branch_coverage=1 00:21:55.843 --rc genhtml_function_coverage=1 00:21:55.843 --rc genhtml_legend=1 00:21:55.843 --rc geninfo_all_blocks=1 00:21:55.843 --rc geninfo_unexecuted_blocks=1 00:21:55.843 00:21:55.843 ' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.843 --rc genhtml_branch_coverage=1 00:21:55.843 --rc genhtml_function_coverage=1 00:21:55.843 --rc genhtml_legend=1 00:21:55.843 --rc geninfo_all_blocks=1 00:21:55.843 --rc geninfo_unexecuted_blocks=1 00:21:55.843 00:21:55.843 ' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.843 --rc genhtml_branch_coverage=1 00:21:55.843 --rc genhtml_function_coverage=1 00:21:55.843 --rc genhtml_legend=1 00:21:55.843 --rc geninfo_all_blocks=1 00:21:55.843 --rc geninfo_unexecuted_blocks=1 00:21:55.843 00:21:55.843 ' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.843 06:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.843 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.991 
06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:03.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:03.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.991 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:03.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:03.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.992 06:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:03.992 00:22:03.992 --- 10.0.0.2 ping statistics --- 00:22:03.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.992 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:22:03.992 00:22:03.992 --- 10.0.0.1 ping statistics --- 00:22:03.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.992 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=368169 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 368169 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 368169 ']' 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.992 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:03.992 [2024-12-09 06:20:57.849369] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
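The trace above has just finished wiring up the split-NIC test topology: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, to play the target, while its sibling cvl_0_1 stays in the root namespace as the initiator; 10.0.0.2/24 and 10.0.0.1/24 are assigned respectively, an iptables rule opens TCP port 4420 toward the initiator interface, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, using the interface names from this trace (another host would substitute its own ports):

    # namespace topology as set up by nvmf_tcp_init in the trace above
    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"                        # private namespace for the target side
    ip link set cvl_0_0 netns "$TARGET_NS"           # move one physical port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP listen port
    ping -c 1 10.0.0.2                               # root ns -> target side
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target ns -> initiator side

Running the target under ip netns exec (as NVMF_TARGET_NS_CMD does here) is what lets a single machine exercise real port-to-port NVMe/TCP traffic instead of loopback.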
00:22:03.992 [2024-12-09 06:20:57.849430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.992 [2024-12-09 06:20:57.945922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.992 [2024-12-09 06:20:57.994252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.992 [2024-12-09 06:20:57.994301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.992 [2024-12-09 06:20:57.994309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.992 [2024-12-09 06:20:57.994316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.992 [2024-12-09 06:20:57.994321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.992 [2024-12-09 06:20:57.995039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 Malloc0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 [2024-12-09 06:20:58.819332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.253 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.513 [2024-12-09 06:20:58.843612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.513 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.513 [2024-12-09 06:20:58.949570] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:06.424 Initializing NVMe Controllers 00:22:06.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:06.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:06.424 Initialization complete. Launching workers. 00:22:06.424 ======================================================== 00:22:06.424 Latency(us) 00:22:06.424 Device Information : IOPS MiB/s Average min max 00:22:06.424 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32264.24 8010.48 63852.30 00:22:06.424 ======================================================== 00:22:06.424 Total : 129.00 16.12 32264.24 8010.48 63852.30 00:22:06.424 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.424 rmmod nvme_tcp 00:22:06.424 rmmod nvme_fabrics 00:22:06.424 rmmod nvme_keyring 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 368169 ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 368169 ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 368169' 00:22:06.424 killing process with pid 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 368169 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.424 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.969 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.969 00:22:08.969 real 0m12.976s 00:22:08.969 user 0m5.340s 00:22:08.969 sys 0m6.219s 00:22:08.969 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.969 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:08.969 ************************************ 00:22:08.969 END TEST nvmf_wait_for_buf 00:22:08.969 ************************************ 00:22:08.969 06:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:08.969 06:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:08.969 06:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:08.969 06:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:08.969 06:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.969 06:21:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.557 06:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.557 ************************************ 00:22:15.557 START TEST nvmf_perf_adq 00:22:15.557 ************************************ 00:22:15.557 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:15.557 * Looking for test storage... 00:22:15.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.557 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:15.557 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:22:15.557 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.818 06:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:15.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.818 --rc genhtml_branch_coverage=1 00:22:15.818 --rc genhtml_function_coverage=1 00:22:15.818 --rc genhtml_legend=1 00:22:15.818 --rc geninfo_all_blocks=1 00:22:15.818 --rc geninfo_unexecuted_blocks=1 00:22:15.818 00:22:15.818 ' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:15.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.818 --rc genhtml_branch_coverage=1 00:22:15.818 --rc genhtml_function_coverage=1 00:22:15.818 --rc genhtml_legend=1 00:22:15.818 --rc geninfo_all_blocks=1 00:22:15.818 --rc geninfo_unexecuted_blocks=1 00:22:15.818 00:22:15.818 ' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:15.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.818 --rc genhtml_branch_coverage=1 00:22:15.818 --rc genhtml_function_coverage=1 00:22:15.818 --rc genhtml_legend=1 00:22:15.818 --rc geninfo_all_blocks=1 00:22:15.818 --rc geninfo_unexecuted_blocks=1 00:22:15.818 00:22:15.818 ' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:15.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.818 --rc genhtml_branch_coverage=1 00:22:15.818 --rc genhtml_function_coverage=1 00:22:15.818 --rc genhtml_legend=1 00:22:15.818 --rc geninfo_all_blocks=1 00:22:15.818 --rc geninfo_unexecuted_blocks=1 00:22:15.818 00:22:15.818 ' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
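The scripts/common.sh fragment traced twice in this section (decimal, ver1[v], ver2[v], lt 1.15 2) is a component-wise version gate: the installed lcov version is split on '.', '-' and ':' and compared field by field against 2, and because 1.15 is lower than 2 the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 flags seen in the trace are exported into LCOV_OPTS. A condensed sketch of that comparison, reconstructed from the traced function and line tags (the real helper in spdk/scripts/common.sh also validates each field through its decimal helper, which this simplification folds into a :-0 default):

    # returns 0 (true) when version $1 is strictly lower than $2
    lt() {
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && \
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'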
00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:15.818 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.819 06:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.951 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.952 06:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:23.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:23.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:23.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:23.952 06:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:23.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:23.952 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:24.523 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:27.823 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:33.112 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:33.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:33.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:33.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:33.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:22:33.113 00:22:33.113 --- 10.0.0.2 ping statistics --- 00:22:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.113 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:22:33.113 00:22:33.113 --- 10.0.0.1 ping statistics --- 00:22:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.113 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.113 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=378579 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 378579 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 378579 ']' 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.114 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.114 [2024-12-09 06:21:27.610377] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
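The network plumbing the trace above just walked through is easier to read out of xtrace form. A minimal sketch of the same sequence, assuming the two E810 ports enumerate as cvl_0_0 and cvl_0_1 exactly as in the discovery output (the captured run also tags the iptables rule with an SPDK_NVMF comment, omitted here):

    # Wire the two back-to-back E810 ports into separate stacks: the target
    # port moves into its own namespace, the initiator port stays in the root.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then prove reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings answer (0.576 ms out, 0.264 ms back in this run), so the link is known good before nvmf_tgt is launched inside the namespace with -i 0 -e 0xFFFF -m 0xF --wait-for-rpc.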
00:22:33.114 [2024-12-09 06:21:27.610443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.374 [2024-12-09 06:21:27.700750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.374 [2024-12-09 06:21:27.753679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.374 [2024-12-09 06:21:27.753728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.374 [2024-12-09 06:21:27.753736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.374 [2024-12-09 06:21:27.753743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.374 [2024-12-09 06:21:27.753750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.374 [2024-12-09 06:21:27.755702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.374 [2024-12-09 06:21:27.755822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.374 [2024-12-09 06:21:27.755976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.374 [2024-12-09 06:21:27.755976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.945 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 
06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 [2024-12-09 06:21:28.656634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 Malloc1 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.206 [2024-12-09 06:21:28.718839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=378699 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:34.206 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:36.745 "tick_rate": 2600000000, 00:22:36.745 "poll_groups": [ 00:22:36.745 { 00:22:36.745 "name": "nvmf_tgt_poll_group_000", 00:22:36.745 "admin_qpairs": 1, 00:22:36.745 "io_qpairs": 1, 00:22:36.745 "current_admin_qpairs": 1, 00:22:36.745 "current_io_qpairs": 1, 00:22:36.745 "pending_bdev_io": 0, 00:22:36.745 "completed_nvme_io": 26054, 00:22:36.745 "transports": [ 00:22:36.745 { 00:22:36.745 "trtype": "TCP" 00:22:36.745 } 00:22:36.745 ] 00:22:36.745 }, 00:22:36.745 { 00:22:36.745 "name": "nvmf_tgt_poll_group_001", 00:22:36.745 "admin_qpairs": 0, 00:22:36.745 "io_qpairs": 1, 00:22:36.745 "current_admin_qpairs": 0, 00:22:36.745 "current_io_qpairs": 1, 00:22:36.745 "pending_bdev_io": 0, 00:22:36.745 "completed_nvme_io": 27279, 00:22:36.745 "transports": [ 00:22:36.745 { 00:22:36.745 "trtype": "TCP" 00:22:36.745 } 00:22:36.745 ] 00:22:36.745 }, 00:22:36.745 { 00:22:36.745 "name": "nvmf_tgt_poll_group_002", 00:22:36.745 "admin_qpairs": 0, 00:22:36.745 "io_qpairs": 1, 00:22:36.745 "current_admin_qpairs": 0, 00:22:36.745 "current_io_qpairs": 1, 00:22:36.745 "pending_bdev_io": 0, 00:22:36.745 "completed_nvme_io": 25975, 00:22:36.745 "transports": [ 00:22:36.745 { 00:22:36.745 "trtype": "TCP" 00:22:36.745 } 00:22:36.745 ] 00:22:36.745 }, 00:22:36.745 { 00:22:36.745 "name": "nvmf_tgt_poll_group_003", 00:22:36.745 "admin_qpairs": 0, 00:22:36.745 "io_qpairs": 1, 00:22:36.745 "current_admin_qpairs": 0, 00:22:36.745 "current_io_qpairs": 1, 00:22:36.745 "pending_bdev_io": 0, 00:22:36.745 "completed_nvme_io": 21992, 00:22:36.745 "transports": [ 00:22:36.745 { 00:22:36.745 "trtype": "TCP" 00:22:36.745 } 00:22:36.745 ] 00:22:36.745 } 00:22:36.745 ] 00:22:36.745 }' 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:36.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 378699 00:22:44.876 Initializing NVMe Controllers 00:22:44.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:22:44.876 Initialization complete. Launching workers. 00:22:44.876 ======================================================== 00:22:44.876 Latency(us) 00:22:44.876 Device Information : IOPS MiB/s Average min max 00:22:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12836.30 50.14 4986.41 1256.17 7936.63 00:22:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14501.40 56.65 4412.91 1482.06 12777.27 00:22:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13829.20 54.02 4627.78 1296.08 12492.93 00:22:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13960.50 54.53 4583.99 1489.11 9279.27 00:22:44.876 ======================================================== 00:22:44.876 Total : 55127.40 215.34 4643.68 1256.17 12777.27 00:22:44.876 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.876 rmmod nvme_tcp 00:22:44.876 rmmod nvme_fabrics 00:22:44.876 rmmod nvme_keyring 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 378579 ']' 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 378579 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 378579 ']' 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 378579 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.876 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378579 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378579' 00:22:44.876 killing process with pid 378579 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 378579 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 378579 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.876 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.783 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.783 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:46.783 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:46.783 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:48.167 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:50.717 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.068 06:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:56.068 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:56.068 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.068 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:56.069 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.069 06:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:56.069 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.069 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:22:56.069 00:22:56.069 --- 10.0.0.2 ping statistics --- 00:22:56.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.069 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:22:56.069 00:22:56.069 --- 10.0.0.1 ping statistics --- 00:22:56.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.069 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:56.069 net.core.busy_poll = 1 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:22:56.069 net.core.busy_read = 1 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=382667 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 382667 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 382667 ']' 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.069 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.069 [2024-12-09 06:21:50.582683] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:22:56.069 [2024-12-09 06:21:50.582751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.330 [2024-12-09 06:21:50.679157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.330 [2024-12-09 06:21:50.730621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
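This second pass exercises the actual ADQ path, and the knobs it just set are the heart of the test. The same sequence distilled from the trace, again as a sketch under the same assumptions (device cvl_0_0 inside the cvl_0_0_ns_spdk namespace, listener at 10.0.0.2:4420; the set_xps_rxqs helper is the script from the SPDK tree shown above):

    NS="ip netns exec cvl_0_0_ns_spdk"
    # Hardware traffic-class offload on, packet-inspect optimization off,
    # so the E810 actually honors the channel filters configured below.
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Poll sockets from the application instead of sleeping on interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default traffic),
    # TC1 = queues 2-3 (the ADQ application queues).
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Hardware-steer NVMe/TCP flows (dst 10.0.0.2:4420) into TC1.
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

On the SPDK side, the notable differences from the first pass are --enable-placement-id 1 on the posix sock implementation and --sock-priority 1 on the TCP transport, so the target can place each accepted socket on a poll group matching the hardware queue it arrived on.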
00:22:56.330 [2024-12-09 06:21:50.730675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.330 [2024-12-09 06:21:50.730684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.330 [2024-12-09 06:21:50.730691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.330 [2024-12-09 06:21:50.730697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.330 [2024-12-09 06:21:50.732947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.330 [2024-12-09 06:21:50.733101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.330 [2024-12-09 06:21:50.733256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.330 [2024-12-09 06:21:50.733256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.902 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 [2024-12-09 06:21:51.579323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 Malloc1 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.163 [2024-12-09 06:21:51.631445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=382991 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:57.163 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:59.076 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:59.076 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.076 06:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:59.335 "tick_rate": 2600000000, 00:22:59.335 "poll_groups": [ 00:22:59.335 { 00:22:59.335 "name": "nvmf_tgt_poll_group_000", 00:22:59.335 "admin_qpairs": 1, 00:22:59.335 "io_qpairs": 4, 00:22:59.335 "current_admin_qpairs": 1, 00:22:59.335 "current_io_qpairs": 4, 00:22:59.335 "pending_bdev_io": 0, 00:22:59.335 "completed_nvme_io": 48211, 00:22:59.335 "transports": [ 00:22:59.335 { 00:22:59.335 "trtype": "TCP" 00:22:59.335 } 00:22:59.335 ] 00:22:59.335 }, 00:22:59.335 { 00:22:59.335 "name": "nvmf_tgt_poll_group_001", 00:22:59.335 "admin_qpairs": 0, 00:22:59.335 "io_qpairs": 0, 00:22:59.335 "current_admin_qpairs": 0, 00:22:59.335 "current_io_qpairs": 0, 00:22:59.335 "pending_bdev_io": 0, 00:22:59.335 "completed_nvme_io": 0, 00:22:59.335 "transports": [ 00:22:59.335 { 00:22:59.335 "trtype": "TCP" 00:22:59.335 } 00:22:59.335 ] 00:22:59.335 }, 00:22:59.335 { 00:22:59.335 "name": "nvmf_tgt_poll_group_002", 00:22:59.335 "admin_qpairs": 0, 00:22:59.335 "io_qpairs": 0, 00:22:59.335 "current_admin_qpairs": 0, 00:22:59.335 "current_io_qpairs": 0, 00:22:59.335 "pending_bdev_io": 0, 00:22:59.335 "completed_nvme_io": 0, 00:22:59.335 "transports": [ 00:22:59.335 { 00:22:59.335 "trtype": "TCP" 00:22:59.335 } 00:22:59.335 ] 00:22:59.335 }, 00:22:59.335 { 00:22:59.335 "name": "nvmf_tgt_poll_group_003", 00:22:59.335 "admin_qpairs": 0, 00:22:59.335 "io_qpairs": 0, 00:22:59.335 "current_admin_qpairs": 0, 00:22:59.335 "current_io_qpairs": 0, 00:22:59.335 "pending_bdev_io": 0, 00:22:59.335 "completed_nvme_io": 0, 00:22:59.335 "transports": [ 00:22:59.335 { 00:22:59.335 "trtype": "TCP" 00:22:59.335 } 00:22:59.335 ] 00:22:59.335 } 00:22:59.335 ] 00:22:59.335 }' 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:59.335 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 382991 00:23:07.462 Initializing NVMe Controllers 00:23:07.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:07.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:07.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:07.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:07.462 Initialization complete. Launching workers. 
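# The steering check above: with ADQ effective, all four I/O qpairs land on a
# single poll group, so counting the groups left idle must yield at least 2 of
# the 4. A sketch of the same test using the suite's rpc_cmd helper against a
# live target:
count=$(rpc_cmd nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups"
fi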
00:23:07.462 ======================================================== 00:23:07.462 Latency(us) 00:23:07.462 Device Information : IOPS MiB/s Average min max 00:23:07.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6104.30 23.84 10495.93 1407.22 55116.82 00:23:07.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5851.20 22.86 10937.56 1224.28 54858.52 00:23:07.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8809.00 34.41 7264.70 1105.10 54096.72 00:23:07.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5157.40 20.15 12409.65 1298.70 54782.73 00:23:07.462 ======================================================== 00:23:07.462 Total : 25921.89 101.26 9878.31 1105.10 55116.82 00:23:07.462 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.462 rmmod nvme_tcp 00:23:07.462 rmmod nvme_fabrics 00:23:07.462 rmmod nvme_keyring 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 382667 ']' 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 382667 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 382667 ']' 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 382667 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382667 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382667' 00:23:07.462 killing process with pid 382667 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 382667 00:23:07.462 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 382667 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.462 06:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.462 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.722 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.722 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.722 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.722 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.722 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:11.026 00:23:11.026 real 0m55.102s 00:23:11.026 user 2m49.583s 00:23:11.026 sys 0m12.197s 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.026 ************************************ 00:23:11.026 END TEST nvmf_perf_adq 00:23:11.026 ************************************ 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.026 06:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.026 ************************************ 00:23:11.026 START TEST nvmf_shutdown 00:23:11.026 ************************************ 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:11.027 * Looking for test storage... 
00:23:11.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.027 --rc genhtml_branch_coverage=1 00:23:11.027 --rc genhtml_function_coverage=1 00:23:11.027 --rc genhtml_legend=1 00:23:11.027 --rc geninfo_all_blocks=1 00:23:11.027 --rc geninfo_unexecuted_blocks=1 00:23:11.027 00:23:11.027 ' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.027 --rc genhtml_branch_coverage=1 00:23:11.027 --rc genhtml_function_coverage=1 00:23:11.027 --rc genhtml_legend=1 00:23:11.027 --rc geninfo_all_blocks=1 00:23:11.027 --rc geninfo_unexecuted_blocks=1 00:23:11.027 00:23:11.027 ' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.027 --rc genhtml_branch_coverage=1 00:23:11.027 --rc genhtml_function_coverage=1 00:23:11.027 --rc genhtml_legend=1 00:23:11.027 --rc geninfo_all_blocks=1 00:23:11.027 --rc geninfo_unexecuted_blocks=1 00:23:11.027 00:23:11.027 ' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.027 --rc genhtml_branch_coverage=1 00:23:11.027 --rc genhtml_function_coverage=1 00:23:11.027 --rc genhtml_legend=1 00:23:11.027 --rc geninfo_all_blocks=1 00:23:11.027 --rc geninfo_unexecuted_blocks=1 00:23:11.027 00:23:11.027 ' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
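# The cmp_versions walk above implements "lt 1.15 2" for the installed lcov:
# split both versions on '.', '-', ':' and compare numerically field by field,
# padding the shorter one with zeros. A condensed sketch of the helper:
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1   # equal: not less-than
}
lt 1.15 2 && echo "lcov < 2: keep branch/function coverage flags"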
00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:11.027 06:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:11.027 ************************************ 00:23:11.027 START TEST nvmf_shutdown_tc1 00:23:11.027 ************************************ 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.027 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.165 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.166 06:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.166 06:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:19.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:19.166 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:19.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:19.166 06:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:19.166 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.166 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:19.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:23:19.167 00:23:19.167 --- 10.0.0.2 ping statistics --- 00:23:19.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.167 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:23:19.167 00:23:19.167 --- 10.0.0.1 ping statistics --- 00:23:19.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.167 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=388846 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 388846 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 388846 ']' 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
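# waitforlisten, traced above with max_retries=100, parks the test until the
# just-launched target (pid 388846 here) answers on /var/tmp/spdk.sock. A
# hedged sketch of the idea only -- the real helper in autotest_common.sh
# does more, but the shape is a poll loop over the pid and the RPC socket:
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $sock ]] && return 0               # RPC socket is listening
        sleep 0.1
    done
    return 1
}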
00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:19.167 [2024-12-09 06:22:12.652855] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:23:19.167 [2024-12-09 06:22:12.652917] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.167 [2024-12-09 06:22:12.731152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.167 [2024-12-09 06:22:12.782678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.167 [2024-12-09 06:22:12.782729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.167 [2024-12-09 06:22:12.782737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.167 [2024-12-09 06:22:12.782744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.167 [2024-12-09 06:22:12.782750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.167 [2024-12-09 06:22:12.784674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.167 [2024-12-09 06:22:12.784828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.167 [2024-12-09 06:22:12.784982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.167 [2024-12-09 06:22:12.784983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 [2024-12-09 06:22:13.536155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:19.167 06:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.167 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 Malloc1 00:23:19.167 [2024-12-09 06:22:13.662932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.167 Malloc2 00:23:19.167 Malloc3 00:23:19.428 Malloc4 00:23:19.428 Malloc5 00:23:19.428 Malloc6 00:23:19.428 Malloc7 00:23:19.428 Malloc8 00:23:19.428 Malloc9 00:23:19.689 Malloc10 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=389198 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 389198 /var/tmp/bdevperf.sock 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 389198 ']' 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.689 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
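# create_subsystems, traced above, appends one block per subsystem (1..10) to
# rpcs.txt and replays the file through a single rpc_cmd. A sketch of what
# each iteration contributes, assuming the 64 MiB/512 B malloc bdevs seen in
# the output and a 10.0.0.2:4420 listener as used elsewhere in this run (the
# SPDK$i serial is illustrative; the real script's values may differ):
for i in {1..10}; do
    cat <<EOF >> rpcs.txt
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done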
00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 [2024-12-09 06:22:14.157277] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:23:19.690 [2024-12-09 06:22:14.157332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.690 "name": "Nvme$subsystem", 00:23:19.690 "trtype": "$TEST_TRANSPORT", 00:23:19.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.690 "adrfam": "ipv4", 00:23:19.690 "trsvcid": "$NVMF_PORT", 00:23:19.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.690 "hdgst": ${hdgst:-false}, 00:23:19.690 "ddgst": ${ddgst:-false} 00:23:19.690 }, 00:23:19.690 "method": "bdev_nvme_attach_controller" 00:23:19.690 } 00:23:19.690 EOF 00:23:19.690 )") 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.690 { 00:23:19.690 "params": { 00:23:19.691 "name": "Nvme$subsystem", 00:23:19.691 "trtype": "$TEST_TRANSPORT", 00:23:19.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.691 "adrfam": "ipv4", 
00:23:19.691 "trsvcid": "$NVMF_PORT", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.691 "hdgst": ${hdgst:-false}, 00:23:19.691 "ddgst": ${ddgst:-false} 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 } 00:23:19.691 EOF 00:23:19.691 )") 00:23:19.691 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:19.691 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:19.691 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:19.691 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme1", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme2", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme3", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme4", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme5", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme6", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme7", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 
"adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme8", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme9", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 },{ 00:23:19.691 "params": { 00:23:19.691 "name": "Nvme10", 00:23:19.691 "trtype": "tcp", 00:23:19.691 "traddr": "10.0.0.2", 00:23:19.691 "adrfam": "ipv4", 00:23:19.691 "trsvcid": "4420", 00:23:19.691 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.691 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.691 "hdgst": false, 00:23:19.691 "ddgst": false 00:23:19.691 }, 00:23:19.691 "method": "bdev_nvme_attach_controller" 00:23:19.691 }' 00:23:19.691 [2024-12-09 06:22:14.243719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.951 [2024-12-09 06:22:14.278142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 389198 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:21.332 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:22.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 389198 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 388846 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.273 "hdgst": ${hdgst:-false}, 00:23:22.273 "ddgst": ${ddgst:-false} 00:23:22.273 }, 00:23:22.273 "method": "bdev_nvme_attach_controller" 00:23:22.273 } 00:23:22.273 EOF 00:23:22.273 )") 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.273 [2024-12-09 06:22:16.643825] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:23:22.273 [2024-12-09 06:22:16.643881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389536 ] 00:23:22.273 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.273 { 00:23:22.273 "params": { 00:23:22.273 "name": "Nvme$subsystem", 00:23:22.273 "trtype": "$TEST_TRANSPORT", 00:23:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.273 "adrfam": "ipv4", 00:23:22.273 "trsvcid": "$NVMF_PORT", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.274 "hdgst": ${hdgst:-false}, 00:23:22.274 "ddgst": ${ddgst:-false} 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 } 00:23:22.274 EOF 00:23:22.274 )") 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.274 { 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme$subsystem", 00:23:22.274 "trtype": "$TEST_TRANSPORT", 00:23:22.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "$NVMF_PORT", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.274 "hdgst": ${hdgst:-false}, 00:23:22.274 "ddgst": ${ddgst:-false} 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 } 00:23:22.274 EOF 00:23:22.274 )") 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.274 { 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme$subsystem", 00:23:22.274 "trtype": "$TEST_TRANSPORT", 00:23:22.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "$NVMF_PORT", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.274 "hdgst": ${hdgst:-false}, 00:23:22.274 "ddgst": ${ddgst:-false} 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 } 00:23:22.274 EOF 00:23:22.274 )") 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.274 { 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme$subsystem", 00:23:22.274 "trtype": "$TEST_TRANSPORT", 00:23:22.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "$NVMF_PORT", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.274 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.274 "hdgst": ${hdgst:-false}, 00:23:22.274 "ddgst": ${ddgst:-false} 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 } 00:23:22.274 EOF 00:23:22.274 )") 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:22.274 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme1", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme2", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme3", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme4", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme5", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme6", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme7", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:22.274 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme8", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme9", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 },{ 00:23:22.274 "params": { 00:23:22.274 "name": "Nvme10", 00:23:22.274 "trtype": "tcp", 00:23:22.274 "traddr": "10.0.0.2", 00:23:22.274 "adrfam": "ipv4", 00:23:22.274 "trsvcid": "4420", 00:23:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:22.274 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:22.274 "hdgst": false, 00:23:22.274 "ddgst": false 00:23:22.274 }, 00:23:22.274 "method": "bdev_nvme_attach_controller" 00:23:22.274 }' 00:23:22.274 [2024-12-09 06:22:16.730316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.274 [2024-12-09 06:22:16.764407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.654 Running I/O for 1 seconds... 00:23:24.851 2052.00 IOPS, 128.25 MiB/s 00:23:24.851 Latency(us) 00:23:24.851 [2024-12-09T05:22:19.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme1n1 : 1.08 240.38 15.02 0.00 0.00 258479.61 19660.80 224233.94 00:23:24.851 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme2n1 : 1.10 232.13 14.51 0.00 0.00 268408.52 15627.82 238752.69 00:23:24.851 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme3n1 : 1.09 234.89 14.68 0.00 0.00 260592.64 16837.71 227460.33 00:23:24.851 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme4n1 : 1.10 233.55 14.60 0.00 0.00 257714.81 14014.62 237139.50 00:23:24.851 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme5n1 : 1.16 275.36 17.21 0.00 0.00 215780.27 14619.57 240365.88 00:23:24.851 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme6n1 : 1.16 276.29 17.27 0.00 0.00 211325.40 18047.61 245205.46 00:23:24.851 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme7n1 : 1.12 284.76 17.80 0.00 0.00 201002.69 19963.27 219394.36 00:23:24.851 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme8n1 : 1.17 328.21 20.51 0.00 0.00 171739.37 6503.19 230686.72 00:23:24.851 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme9n1 : 1.15 226.11 14.13 0.00 0.00 244198.26 1852.65 238752.69 00:23:24.851 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.851 Verification LBA range: start 0x0 length 0x400 00:23:24.851 Nvme10n1 : 1.18 272.33 17.02 0.00 0.00 200506.84 12754.31 248431.85 00:23:24.851 [2024-12-09T05:22:19.438Z] =================================================================================================================== 00:23:24.851 [2024-12-09T05:22:19.438Z] Total : 2604.02 162.75 0.00 0.00 224667.97 1852.65 248431.85 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.851 rmmod nvme_tcp 00:23:24.851 rmmod nvme_fabrics 00:23:24.851 rmmod nvme_keyring 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 388846 ']' 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 388846 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 388846 ']' 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 388846 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:24.851 06:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.851 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388846 00:23:25.110 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388846' 00:23:25.111 killing process with pid 388846 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 388846 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 388846 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.111 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.654 00:23:27.654 real 0m16.284s 00:23:27.654 user 0m33.297s 00:23:27.654 sys 0m6.627s 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.654 ************************************ 00:23:27.654 END TEST nvmf_shutdown_tc1 00:23:27.654 ************************************ 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.654 06:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:27.654 ************************************ 00:23:27.654 START TEST nvmf_shutdown_tc2 00:23:27.654 ************************************ 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.654 06:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:23:27.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:27.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.654 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:27.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.655 
06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:27.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.655 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:23:27.655 00:23:27.655 --- 10.0.0.2 ping statistics --- 00:23:27.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.655 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:23:27.655 00:23:27.655 --- 10.0.0.1 ping statistics --- 00:23:27.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.655 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=390552 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 390552 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 390552 ']' 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
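The nvmf_tcp_init trace above (nvmf/common.sh@250-@291) is the point of this phase: the target-side port is moved into its own network namespace so NVMe/TCP traffic between initiator and target crosses a real E810 interface pair instead of loopback. Distilled to plain shell, the sequence is (commands exactly as traced above; comments added):

    # Flush stale addressing, then give the target port its own namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listener port, tagged so cleanup can strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Prove both directions work before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The SPDK_NVMF comment tag is what the iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup later in this log keys off.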
00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.655 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.915 [2024-12-09 06:22:22.280565] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:23:27.915 [2024-12-09 06:22:22.280628] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.915 [2024-12-09 06:22:22.350641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.915 [2024-12-09 06:22:22.388344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.915 [2024-12-09 06:22:22.388382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.915 [2024-12-09 06:22:22.388388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.915 [2024-12-09 06:22:22.388393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.915 [2024-12-09 06:22:22.388398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.915 [2024-12-09 06:22:22.389988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.915 [2024-12-09 06:22:22.390137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.915 [2024-12-09 06:22:22.390286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.915 [2024-12-09 06:22:22.390287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.855 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.855 [2024-12-09 06:22:23.139247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:28.856 06:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.856 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.856 Malloc1 
00:23:28.856 [2024-12-09 06:22:23.252836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.856 Malloc2 00:23:28.856 Malloc3 00:23:28.856 Malloc4 00:23:28.856 Malloc5 00:23:28.856 Malloc6 00:23:29.116 Malloc7 00:23:29.116 Malloc8 00:23:29.116 Malloc9 00:23:29.116 Malloc10 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=390906 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 390906 /var/tmp/bdevperf.sock 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 390906 ']' 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
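Between the timing_enter/timing_exit pair for create_subsystems, the @28/@29 loop above stages one batch of RPCs per subsystem into rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 replays the whole file against the target in one shot; the Malloc1 through Malloc10 bdevs and the 10.0.0.2:4420 listener notice in the output are its result. A sketch of what each iteration appends (the transport itself was already created by the @21 nvmf_create_transport -t tcp -o -u 8192; the exact flags and the Malloc size variables below are illustrative, not read from this log):

    for i in {1..10}; do
        cat >> rpcs.txt <<-EOF
    	bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
    	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    	EOF
    done
    rpc_cmd < rpcs.txt    # one batched replay, ten subsystems

With the subsystems in place, the next step below starts bdevperf against them, generating its controller config on the fly.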
00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.116 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.116 { 00:23:29.116 "params": { 00:23:29.116 "name": "Nvme$subsystem", 00:23:29.116 "trtype": "$TEST_TRANSPORT", 00:23:29.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.116 "adrfam": "ipv4", 00:23:29.116 "trsvcid": "$NVMF_PORT", 00:23:29.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.116 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.117 { 00:23:29.117 "params": { 00:23:29.117 "name": "Nvme$subsystem", 00:23:29.117 "trtype": "$TEST_TRANSPORT", 00:23:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.117 "adrfam": "ipv4", 00:23:29.117 "trsvcid": "$NVMF_PORT", 00:23:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.117 "hdgst": ${hdgst:-false}, 00:23:29.117 "ddgst": ${ddgst:-false} 00:23:29.117 }, 00:23:29.117 "method": "bdev_nvme_attach_controller" 00:23:29.117 } 00:23:29.117 EOF 00:23:29.117 )") 
00:23:29.117 [2024-12-09 06:22:23.696214] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:23:29.117 [2024-12-09 06:22:23.696265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390906 ] 00:23:29.117 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.377 { 00:23:29.377 "params": { 00:23:29.377 "name": "Nvme$subsystem", 00:23:29.377 "trtype": "$TEST_TRANSPORT", 00:23:29.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.377 "adrfam": "ipv4", 00:23:29.377 "trsvcid": "$NVMF_PORT", 00:23:29.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.377 "hdgst": ${hdgst:-false}, 00:23:29.377 "ddgst": ${ddgst:-false} 00:23:29.377 }, 00:23:29.377 "method": "bdev_nvme_attach_controller" 00:23:29.377 } 00:23:29.377 EOF 00:23:29.377 )") 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.377 { 00:23:29.377 "params": { 00:23:29.377 "name": "Nvme$subsystem", 00:23:29.377 "trtype": "$TEST_TRANSPORT", 00:23:29.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.377 "adrfam": "ipv4", 00:23:29.377 "trsvcid": "$NVMF_PORT", 00:23:29.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.377 "hdgst": ${hdgst:-false}, 00:23:29.377 "ddgst": ${ddgst:-false} 00:23:29.377 }, 00:23:29.377 "method": "bdev_nvme_attach_controller" 00:23:29.377 } 00:23:29.377 EOF 00:23:29.377 )") 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.377 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.377 { 00:23:29.377 "params": { 00:23:29.377 "name": "Nvme$subsystem", 00:23:29.377 "trtype": "$TEST_TRANSPORT", 00:23:29.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.377 "adrfam": "ipv4", 00:23:29.377 "trsvcid": "$NVMF_PORT", 00:23:29.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.377 "hdgst": ${hdgst:-false}, 00:23:29.377 "ddgst": ${ddgst:-false} 00:23:29.377 }, 00:23:29.377 "method": "bdev_nvme_attach_controller" 00:23:29.377 } 00:23:29.377 EOF 00:23:29.377 )") 00:23:29.378 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:29.378 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
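The @560-@582 heredoc churn above is gen_nvmf_target_json building bdevperf's configuration: each $(cat <<-EOF ...) instantiates the bdev_nvme_attach_controller template once per subsystem number, and the @584-@586 jq/IFS/printf tail comma-joins the fragments into one JSON document. Roughly, under the values in effect here (tcp, 10.0.0.2, port 4420; the real helper in nvmf/common.sh carries extra options this sketch drops):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
    		{
    		  "params": {
    		    "name": "Nvme$subsystem",
    		    "trtype": "tcp",
    		    "traddr": "10.0.0.2",
    		    "adrfam": "ipv4",
    		    "trsvcid": "4420",
    		    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    		    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    		    "hdgst": false,
    		    "ddgst": false
    		  },
    		  "method": "bdev_nvme_attach_controller"
    		}
    		EOF
            )")
        done
        local IFS=','
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }

bdevperf never sees a file on disk: the --json /dev/fd/63 argument at shutdown.sh@103 is bash process substitution, i.e. bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10, so the ten controllers are attached straight from this generated document.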
00:23:29.378 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:29.378 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme1", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme2", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme3", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme4", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme5", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme6", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme7", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme8", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme9", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 },{ 00:23:29.378 "params": { 00:23:29.378 "name": "Nvme10", 00:23:29.378 "trtype": "tcp", 00:23:29.378 "traddr": "10.0.0.2", 00:23:29.378 "adrfam": "ipv4", 00:23:29.378 "trsvcid": "4420", 00:23:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.378 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.378 "hdgst": false, 00:23:29.378 "ddgst": false 00:23:29.378 }, 00:23:29.378 "method": "bdev_nvme_attach_controller" 00:23:29.378 }' 00:23:29.378 [2024-12-09 06:22:23.782940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.378 [2024-12-09 06:22:23.817133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.760 Running I/O for 10 seconds... 00:23:30.760 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.760 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:30.760 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.760 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.760 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:31.019 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:31.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.538 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- 
# killprocess 390906
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 390906 ']'
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 390906
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390906
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390906'
killing process with pid 390906
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 390906
00:23:31.538 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 390906
00:23:31.797 Received shutdown signal, test time was about 0.942681 seconds
00:23:31.797
00:23:31.797 Latency(us)
00:23:31.797 [2024-12-09T05:22:26.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme1n1 : 0.91 211.09 13.19 0.00 0.00 299780.99 27625.94 250045.05
00:23:31.797 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme2n1 : 0.94 273.74 17.11 0.00 0.00 226697.45 15325.34 233913.11
00:23:31.797 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme3n1 : 0.92 278.77 17.42 0.00 0.00 218130.02 12754.31 233913.11
00:23:31.797 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme4n1 : 0.94 273.02 17.06 0.00 0.00 218603.72 18350.08 230686.72
00:23:31.797 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme5n1 : 0.92 224.57 14.04 0.00 0.00 256509.75 9376.69 235526.30
00:23:31.797 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme6n1 : 0.93 276.61 17.29 0.00 0.00 206632.57 19761.62 212941.59
00:23:31.797 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme7n1 : 0.93 275.20 17.20 0.00 0.00 203319.73 14821.22 230686.72
00:23:31.797 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme8n1 : 0.93 274.93 17.18 0.00 0.00 199211.91 19459.15 232299.91
00:23:31.797 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme9n1 : 0.92 208.83 13.05 0.00 0.00 255468.31 22584.71 261337.40
00:23:31.797 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.797 Verification LBA range: start 0x0 length 0x400
00:23:31.797 Nvme10n1 : 0.94 276.06 17.25 0.00 0.00 190281.26 1430.45 219394.36
00:23:31.797 [2024-12-09T05:22:26.384Z] ===================================================================================================================
00:23:31.797 [2024-12-09T05:22:26.384Z] Total : 2572.84 160.80 0.00 0.00 224115.18 1430.45 261337.40
00:23:31.797 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 390552
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:32.736 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:32.736 rmmod nvme_tcp
00:23:32.995 rmmod nvme_fabrics
00:23:32.996 rmmod nvme_keyring
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 390552 ']'
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 390552
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 390552 ']'
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 390552
00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:23:32.996 06:22:27
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390552 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390552' 00:23:32.996 killing process with pid 390552 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 390552 00:23:32.996 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 390552 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.255 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.160 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.160 00:23:35.160 real 0m7.880s 00:23:35.160 user 0m23.897s 00:23:35.160 sys 0m1.264s 00:23:35.160 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.160 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.160 ************************************ 00:23:35.160 END TEST nvmf_shutdown_tc2 00:23:35.160 ************************************ 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:35.422 ************************************ 00:23:35.422 START TEST nvmf_shutdown_tc3 00:23:35.422 ************************************ 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 
00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:35.422 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.423 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.423 06:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.423 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.423 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.423 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.423 06:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.423 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:23:35.684 00:23:35.684 --- 10.0.0.2 ping statistics --- 00:23:35.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.684 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:23:35.684 00:23:35.684 --- 10.0.0.1 ping statistics --- 00:23:35.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.684 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=392015 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 392015 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 392015 ']' 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
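[Annotation: waitforlisten, whose xtrace brackets the "Waiting for process..." message above, is a bounded poll for the app's RPC socket. A minimal sketch of the pattern as suggested by the visible trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100, the "(( i == 0 ))" exit test); the probe RPC and $rootdir path are assumptions from SPDK convention, and the real helper in common/autotest_common.sh differs in detail:]

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # app died while starting
            # succeeds once the app is actually serving RPCs on the socket
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.5
        done
        ((i < max_retries))
    }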
00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.684 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.684 [2024-12-09 06:22:30.222085] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:23:35.684 [2024-12-09 06:22:30.222137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.943 [2024-12-09 06:22:30.289362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.944 [2024-12-09 06:22:30.323444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.944 [2024-12-09 06:22:30.323482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.944 [2024-12-09 06:22:30.323488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.944 [2024-12-09 06:22:30.323493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.944 [2024-12-09 06:22:30.323497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.944 [2024-12-09 06:22:30.324797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.944 [2024-12-09 06:22:30.324944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.944 [2024-12-09 06:22:30.325089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.944 [2024-12-09 06:22:30.325091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 [2024-12-09 06:22:31.075956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:36.513 06:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.513 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.775 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.775 Malloc1 
00:23:36.775 [2024-12-09 06:22:31.182715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.775 Malloc2 00:23:36.775 Malloc3 00:23:36.775 Malloc4 00:23:36.775 Malloc5 00:23:36.775 Malloc6 00:23:37.038 Malloc7 00:23:37.038 Malloc8 00:23:37.038 Malloc9 00:23:37.038 Malloc10 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=392323 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 392323 /var/tmp/bdevperf.sock 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 392323 ']' 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
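[Annotation: the create_subsystems loop above writes ten subsystem definitions, each backed by a MallocN bdev, into rpcs.txt before replaying them against the target. The trace that follows is gen_nvmf_target_json emitting one bdev_nvme_attach_controller stanza per subsystem for bdevperf; condensed from the nvmf/common.sh@560-586 heredoc visible below (the real script tab-indents the heredoc for <<- stripping, and wraps the comma-joined stanzas into a full bdev-subsystem JSON document before pretty-printing with jq, elided here):]

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=,
        printf '%s\n' "${config[*]}"
    }

[bdevperf consumes the result through process substitution, which the shell exposes as the /dev/fd/63 seen in the trace: build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10.]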
00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.038 { 00:23:37.038 "params": { 00:23:37.038 "name": "Nvme$subsystem", 00:23:37.038 "trtype": "$TEST_TRANSPORT", 00:23:37.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.038 "adrfam": "ipv4", 00:23:37.038 "trsvcid": "$NVMF_PORT", 00:23:37.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.038 "hdgst": ${hdgst:-false}, 00:23:37.038 "ddgst": ${ddgst:-false} 00:23:37.038 }, 00:23:37.038 "method": "bdev_nvme_attach_controller" 00:23:37.038 } 00:23:37.038 EOF 00:23:37.038 )") 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.038 { 00:23:37.038 "params": { 00:23:37.038 "name": "Nvme$subsystem", 00:23:37.038 "trtype": "$TEST_TRANSPORT", 00:23:37.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.038 "adrfam": "ipv4", 00:23:37.038 "trsvcid": "$NVMF_PORT", 00:23:37.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.038 "hdgst": ${hdgst:-false}, 00:23:37.038 "ddgst": ${ddgst:-false} 00:23:37.038 }, 00:23:37.038 "method": "bdev_nvme_attach_controller" 00:23:37.038 } 00:23:37.038 EOF 00:23:37.038 )") 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.038 { 00:23:37.038 "params": { 00:23:37.038 "name": "Nvme$subsystem", 00:23:37.038 "trtype": "$TEST_TRANSPORT", 00:23:37.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.038 "adrfam": "ipv4", 00:23:37.038 "trsvcid": "$NVMF_PORT", 00:23:37.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.038 "hdgst": ${hdgst:-false}, 00:23:37.038 "ddgst": ${ddgst:-false} 00:23:37.038 }, 00:23:37.038 "method": 
"bdev_nvme_attach_controller" 00:23:37.038 } 00:23:37.038 EOF 00:23:37.038 )") 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.038 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.038 { 00:23:37.038 "params": { 00:23:37.038 "name": "Nvme$subsystem", 00:23:37.038 "trtype": "$TEST_TRANSPORT", 00:23:37.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.039 "adrfam": "ipv4", 00:23:37.039 "trsvcid": "$NVMF_PORT", 00:23:37.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.039 "hdgst": ${hdgst:-false}, 00:23:37.039 "ddgst": ${ddgst:-false} 00:23:37.039 }, 00:23:37.039 "method": "bdev_nvme_attach_controller" 00:23:37.039 } 00:23:37.039 EOF 00:23:37.039 )") 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.039 { 00:23:37.039 "params": { 00:23:37.039 "name": "Nvme$subsystem", 00:23:37.039 "trtype": "$TEST_TRANSPORT", 00:23:37.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.039 "adrfam": "ipv4", 00:23:37.039 "trsvcid": "$NVMF_PORT", 00:23:37.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.039 "hdgst": ${hdgst:-false}, 00:23:37.039 "ddgst": ${ddgst:-false} 00:23:37.039 }, 00:23:37.039 "method": "bdev_nvme_attach_controller" 00:23:37.039 } 00:23:37.039 EOF 00:23:37.039 )") 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.039 { 00:23:37.039 "params": { 00:23:37.039 "name": "Nvme$subsystem", 00:23:37.039 "trtype": "$TEST_TRANSPORT", 00:23:37.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.039 "adrfam": "ipv4", 00:23:37.039 "trsvcid": "$NVMF_PORT", 00:23:37.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.039 "hdgst": ${hdgst:-false}, 00:23:37.039 "ddgst": ${ddgst:-false} 00:23:37.039 }, 00:23:37.039 "method": "bdev_nvme_attach_controller" 00:23:37.039 } 00:23:37.039 EOF 00:23:37.039 )") 00:23:37.039 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.300 [2024-12-09 06:22:31.624159] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:23:37.300 [2024-12-09 06:22:31.624209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392323 ] 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.300 { 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme$subsystem", 00:23:37.300 "trtype": "$TEST_TRANSPORT", 00:23:37.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.300 "adrfam": "ipv4", 00:23:37.300 "trsvcid": "$NVMF_PORT", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.300 "hdgst": ${hdgst:-false}, 00:23:37.300 "ddgst": ${ddgst:-false} 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.300 } 00:23:37.300 EOF 00:23:37.300 )") 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.300 { 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme$subsystem", 00:23:37.300 "trtype": "$TEST_TRANSPORT", 00:23:37.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.300 "adrfam": "ipv4", 00:23:37.300 "trsvcid": "$NVMF_PORT", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.300 "hdgst": ${hdgst:-false}, 00:23:37.300 "ddgst": ${ddgst:-false} 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.300 } 00:23:37.300 EOF 00:23:37.300 )") 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.300 { 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme$subsystem", 00:23:37.300 "trtype": "$TEST_TRANSPORT", 00:23:37.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.300 "adrfam": "ipv4", 00:23:37.300 "trsvcid": "$NVMF_PORT", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.300 "hdgst": ${hdgst:-false}, 00:23:37.300 "ddgst": ${ddgst:-false} 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.300 } 00:23:37.300 EOF 00:23:37.300 )") 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.300 { 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme$subsystem", 00:23:37.300 "trtype": "$TEST_TRANSPORT", 00:23:37.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.300 
"adrfam": "ipv4", 00:23:37.300 "trsvcid": "$NVMF_PORT", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.300 "hdgst": ${hdgst:-false}, 00:23:37.300 "ddgst": ${ddgst:-false} 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.300 } 00:23:37.300 EOF 00:23:37.300 )") 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:37.300 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme1", 00:23:37.300 "trtype": "tcp", 00:23:37.300 "traddr": "10.0.0.2", 00:23:37.300 "adrfam": "ipv4", 00:23:37.300 "trsvcid": "4420", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.300 "hdgst": false, 00:23:37.300 "ddgst": false 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.300 },{ 00:23:37.300 "params": { 00:23:37.300 "name": "Nvme2", 00:23:37.300 "trtype": "tcp", 00:23:37.300 "traddr": "10.0.0.2", 00:23:37.300 "adrfam": "ipv4", 00:23:37.300 "trsvcid": "4420", 00:23:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.300 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:37.300 "hdgst": false, 00:23:37.300 "ddgst": false 00:23:37.300 }, 00:23:37.300 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme3", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme4", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme5", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme6", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme7", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 
00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme8", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme9", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 },{ 00:23:37.301 "params": { 00:23:37.301 "name": "Nvme10", 00:23:37.301 "trtype": "tcp", 00:23:37.301 "traddr": "10.0.0.2", 00:23:37.301 "adrfam": "ipv4", 00:23:37.301 "trsvcid": "4420", 00:23:37.301 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:37.301 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:37.301 "hdgst": false, 00:23:37.301 "ddgst": false 00:23:37.301 }, 00:23:37.301 "method": "bdev_nvme_attach_controller" 00:23:37.301 }' 00:23:37.301 [2024-12-09 06:22:31.710008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.301 [2024-12-09 06:22:31.744766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.682 Running I/O for 10 seconds... 
00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:38.943 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.202 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.461 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.461 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:39.461 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:39.461 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 392015 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 392015 ']' 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 392015 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392015 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.737 06:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392015' 00:23:39.737 killing process with pid 392015 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 392015 00:23:39.737 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 392015 00:23:39.737 [2024-12-09 06:22:34.173957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9e760 is same with the state(6) to be set 00:23:39.737 [the identical message repeats for tqpair=0x1d9e760 with timestamps 06:22:34.174019 through 06:22:34.174340] 00:23:39.737 [2024-12-09 06:22:34.175667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd230 is same with the state(6) to be set 00:23:39.737 [the identical message repeats for tqpair=0x1dcd230 with timestamps 06:22:34.175695 through 06:22:34.176011] 00:23:39.738 [2024-12-09 06:22:34.177161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [the identical message repeats for tqpair=0x1d9ec50 from 06:22:34.177177; the capture is truncated here]
with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177400] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.177514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec50 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178146] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.738 [2024-12-09 06:22:34.178850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 
06:22:34.178881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same 
with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.178995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.738 [2024-12-09 06:22:34.179089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179103] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f120 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.179383] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.739 [2024-12-09 06:22:34.180196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f610 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.180346] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.739 [2024-12-09 06:22:34.180673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f990 is same with the state(6) to be set 00:23:39.739 [2024-12-09 06:22:34.180690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
[... last message repeated for tqpair=0x1d9f990 through 2024-12-09 06:22:34.180989; duplicate lines omitted, including fragments interleaved mid-line with the nvme_qpair output below ...]
00:23:39.739 [2024-12-09 06:22:34.180931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.739 [2024-12-09 06:22:34.180955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE commands sqid:1 cid:53-63 nsid:1 (lba 31360-32640 in steps of 128, len:128), each followed by the same ABORTED - SQ DELETION completion, through 2024-12-09 06:22:34.181149; lines omitted ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.739 [2024-12-09 06:22:34.181291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.739 [2024-12-09 06:22:34.181302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.740 [2024-12-09 06:22:34.181786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.740 [2024-12-09 06:22:34.181793] nvme_qpair.c: 
[... READ commands sqid:1 cid:40-51 nsid:1 (lba 29696-31104 in steps of 128, len:128), each followed by the same ABORTED - SQ DELETION completion, through 2024-12-09 06:22:34.182002; lines omitted, including fragments interleaved mid-line with the tcp.c errors below ...]
00:23:39.740 [2024-12-09 06:22:34.181891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9fe60 is same with the state(6) to be set
[... last message repeated for tqpair=0x1d9fe60 through 2024-12-09 06:22:34.182218; duplicate lines omitted ...]
00:23:39.741 [2024-12-09 06:22:34.182642] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:39.741 [2024-12-09 06:22:34.183037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set
[... last message repeated for tqpair=0x1da0330 through 2024-12-09 06:22:34.183136; duplicate lines omitted ...]
00:23:39.741 [2024-12-09 06:22:34.183141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same
with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.183252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set 00:23:39.741 [2024-12-09 06:22:34.184167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:39.741 [2024-12-09 06:22:34.184218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
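The "(00/08)" pair that spdk_nvme_print_completion attaches to each aborted READ above is the NVMe status code type and status code: per the NVMe base specification, SCT 0x0 is the generic command status set and SC 0x08 is Command Aborted due to SQ Deletion, i.e. reads still queued on qpair 1 being failed back while the submission queue is torn down for the controller reset that begins here. A minimal, SPDK-independent sketch of where those fields sit in completion dword 3 (bit offsets from my reading of the spec, not lifted from the SPDK source):

    /* decode_cpl_status.c - hedged sketch of NVMe CQE DW3 status decoding */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Example DW3: SCT=0x0 (generic), SC=0x08 (aborted - SQ deletion). */
        uint32_t dw3 = (0x0u << 25) | (0x08u << 17);

        unsigned sc  = (dw3 >> 17) & 0xffu; /* Status Code      */
        unsigned sct = (dw3 >> 25) & 0x7u;  /* Status Code Type */
        unsigned dnr = (dw3 >> 31) & 0x1u;  /* Do Not Retry     */

        /* Prints "(00/08) dnr:0", matching the completions in this log. */
        printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);
        return 0;
    }

Compiled with any C compiler this prints (00/08) dnr:0, the same pair the log shows for every aborted read.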
00:23:39.741 [2024-12-09 06:22:34.184218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2211130 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.184242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:39.741 [2024-12-09 06:22:34.184254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.741 [... the same ASYNC EVENT REQUEST abort repeats for cid:1-3, and the whole four-command sequence repeats for tqpair=0x1d06610, 0x223a450, 0x1deb700, 0x227ac30, 0x1df8d90, 0x2453090 and 0x220d8e0, each ending in "The recv state of tqpair=... is same with the state(6) to be set" ...]
00:23:39.741 [2024-12-09 06:22:34.186072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.741 [2024-12-09 06:22:34.186094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211130 with addr=10.0.0.2, port=4420
00:23:39.741 [2024-12-09 06:22:34.186102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211130 is same with the state(6) to be set
00:23:39.741 [2024-12-09 06:22:34.186155] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:39.741 [2024-12-09 06:22:34.186292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2211130 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.186359] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:39.741 [2024-12-09 06:22:34.186592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:39.741 [2024-12-09 06:22:34.186605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:39.741 [2024-12-09 06:22:34.186613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:39.741 [2024-12-09 06:22:34.186627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:39.741 [2024-12-09 06:22:34.194210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d06610 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a450 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deb700 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ac30 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8d90 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453090 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.194334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d8e0 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.195429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:39.741 [2024-12-09 06:22:34.195825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.741 [2024-12-09 06:22:34.195842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211130 with addr=10.0.0.2, port=4420
00:23:39.741 [2024-12-09 06:22:34.195850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211130 is same with the state(6) to be set
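errno 111 on Linux is ECONNREFUSED: the test has torn down the target's listener, so each reconnect to 10.0.0.2 port 4420 is answered with a reset instead of a SYN-ACK, and SPDK's posix_sock_create reports the failed connect(). A minimal sketch, assuming a reachable host with nothing listening on that port, that reproduces the same failure mode with plain POSIX sockets (no SPDK involved):

    /* refused_connect.c - hedged sketch of the errno = 111 case */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* With no listener on the port this is errno 111, ECONNREFUSED. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }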
00:23:39.741 [2024-12-09 06:22:34.195913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2211130 (9): Bad file descriptor
00:23:39.741 [2024-12-09 06:22:34.195983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:39.741 [2024-12-09 06:22:34.195991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:39.741 [2024-12-09 06:22:34.195998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:39.741 [2024-12-09 06:22:34.196006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:39.741 [2024-12-09 06:22:34.200819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0330 is same with the state(6) to be set
00:23:39.742 [... the same tqpair=0x1da0330 recv state message repeats through 06:22:34.201241 ...]
00:23:39.742 [2024-12-09 06:22:34.201362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.742 [2024-12-09 06:22:34.201381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.742 [... READ sqid:1 cid:1 through cid:62, nsid:1, lba stepping by 128 from 24704 to 32512, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:39.743 [2024-12-09 06:22:34.202380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.743 [2024-12-09 06:22:34.202386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.743 [2024-12-09 06:22:34.202394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fbb50 is same with the state(6) to be set
00:23:39.743 [2024-12-09 06:22:34.203573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:39.743 [2024-12-09 06:22:34.203986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.743 [2024-12-09 06:22:34.204001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d06610 with addr=10.0.0.2, port=4420
00:23:39.743 [2024-12-09 06:22:34.204011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d06610 is same with the state(6) to be set
00:23:39.743 [2024-12-09 06:22:34.204286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d06610 (9): Bad file descriptor
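The cnode7 cycle beginning here has the same shape as the cnode4 one above: nvme_ctrlr_disconnect, a refused TCP reconnect, spdk_nvme_ctrlr_reconnect_poll_async giving up, and bdev_nvme declaring the reset failed. A rough sketch of that connect-then-reset shape against the public SPDK host API (spdk/nvme.h); this is an illustration under my assumptions, not the asynchronous bdev_nvme retry path that actually emits these messages:

    /* reset_shape.c - hedged sketch; a synchronous stand-in for bdev_nvme's
     * async reconnect loop. Address, port and NQN mirror this log. */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0)
            return 1;

        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode7") != 0)
            return 1;

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;   /* the "connect() failed, errno = 111" case */

        /* If the target drops after connect, a reset tries to reconnect;
         * with the listener gone it fails, as in "controller
         * reinitialization failed" / "Resetting controller failed." */
        if (spdk_nvme_ctrlr_reset(ctrlr) != 0)
            fprintf(stderr, "reset failed, controller left in failed state\n");

        spdk_nvme_detach(ctrlr);
        return 0;
    }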
00:23:39.743 [2024-12-09 06:22:34.204310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:39.743 [2024-12-09 06:22:34.204319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.743 [... the same abort repeats for cid:1-3, ending with "The recv state of tqpair=0x222e2f0 is same with the state(6) to be set", then the whole sequence repeats for tqpair=0x222ec10 ...]
00:23:39.743 [2024-12-09 06:22:34.204557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:39.743 [2024-12-09 06:22:34.204565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:39.743 [2024-12-09 06:22:34.204573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:39.743 [2024-12-09 06:22:34.204579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:39.743 [2024-12-09 06:22:34.204606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.743 [2024-12-09 06:22:34.204614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.743 [... READ sqid:1 cid:1 through cid:28, nsid:1, lba stepping by 128 from 24704 to 28160, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:39.743 [2024-12-09 06:22:34.205069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205076] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.205352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.205358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.211460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.743 [2024-12-09 06:22:34.211495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.743 [2024-12-09 06:22:34.211506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.211744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.211752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2b00 is same with the state(6) to be set 00:23:39.744 [2024-12-09 06:22:34.212989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.744 [2024-12-09 06:22:34.213748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.744 [2024-12-09 06:22:34.213803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.744 [2024-12-09 06:22:34.213810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 
06:22:34.213909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.213987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.213996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.214003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.214011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.214018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.214027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.214033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.214041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df3c10 is same with the state(6) to be set 00:23:39.745 [2024-12-09 06:22:34.215224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.745 [2024-12-09 06:22:34.215897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.745 [2024-12-09 06:22:34.215906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
00:23:39.745 [2024-12-09 06:22:34.215913 - 06:22:34.216257] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:42-63 nsid:1 lba:29952-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 - each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.746 [2024-12-09 06:22:34.216264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b5b0 is same with the state(6) to be set 
00:23:39.746 [2024-12-09 06:22:34.217456 - 06:22:34.218474] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 - each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.747 [2024-12-09 06:22:34.218482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f9530 is same with the state(6) to be set 
00:23:39.747 [2024-12-09 06:22:34.219654 - 06:22:34.220685] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 - each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.220693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fa840 is same with the state(6) to be set 
00:23:39.748 [2024-12-09 06:22:34.221887 - 06:22:34.222692] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:4-52 nsid:1 lba:16896-23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 - each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; abort sequence continues 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.748 [2024-12-09 06:22:34.222932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.748 [2024-12-09 06:22:34.222939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23177d0 is same with the state(6) to be set 00:23:39.748 [2024-12-09 06:22:34.225144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:39.749 [2024-12-09 06:22:34.225185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:39.749 [2024-12-09 06:22:34.225199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:39.749 [2024-12-09 06:22:34.225211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:39.749 [2024-12-09 06:22:34.225295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222e2f0 (9): Bad file descriptor 00:23:39.749 [2024-12-09 06:22:34.225319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222ec10 (9): Bad file descriptor 00:23:39.749 [2024-12-09 06:22:34.225338] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:39.749 [2024-12-09 06:22:34.225351] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
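Each command/completion pair in the flood above is one in-flight I/O on qid:1 being completed with NVMe status ABORTED - SQ DELETION, status code type 0x0 and status code 0x08, the generic status returned for commands outstanding on a submission queue that is torn down mid-reset; during this shutdown test that is the expected outcome, not a data-integrity failure. When triaging a log like this one, a small awk sketch can reduce the flood to a summary (a hypothetical helper, not part of the SPDK test suite; assumes the console log path is passed as $1):

  awk '/ABORTED - SQ DELETION/ { aborts++ }
       /nvme_io_qpair_print_command/ && /READ/  { reads++ }
       /nvme_io_qpair_print_command/ && /WRITE/ { writes++ }
       match($0, /lba:[0-9]+/) {
           lba = substr($0, RSTART + 4, RLENGTH - 4) + 0   # numeric LBA
           if (minlba == "" || lba < minlba) minlba = lba
           if (lba > maxlba) maxlba = lba
       }
       END { printf "aborted=%d reads=%d writes=%d lba=%d..%d\n",
                    aborts, reads, writes, minlba, maxlba }' "$1"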
00:23:39.749 [2024-12-09 06:22:34.225423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:39.749 task offset: 31232 on job bdev=Nvme4n1 fails
00:23:39.749
00:23:39.749 Latency(us)
00:23:39.749 [2024-12-09T05:22:34.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.749 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme1n1 ended in about 0.94 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme1n1 : 0.94 203.68 12.73 67.89 0.00 233126.40 19559.98 221007.56
00:23:39.749 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme2n1 ended in about 0.94 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme2n1 : 0.94 135.46 8.47 67.73 0.00 305633.81 18249.26 254884.63
00:23:39.749 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme3n1 ended in about 0.95 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme3n1 : 0.95 202.72 12.67 67.57 0.00 225285.91 17341.83 204875.62
00:23:39.749 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme4n1 ended in about 0.91 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme4n1 : 0.91 210.16 13.14 70.05 0.00 212344.12 2394.58 230686.72
00:23:39.749 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme5n1 : 0.95 202.25 12.64 67.42 0.00 216852.48 18955.03 225847.14
00:23:39.749 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme6n1 ended in about 0.95 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme6n1 : 0.95 201.78 12.61 67.26 0.00 212901.81 17543.48 251658.24
00:23:39.749 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme7n1 ended in about 0.93 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme7n1 : 0.93 205.71 12.86 68.57 0.00 203910.89 18350.08 224233.94
00:23:39.749 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme8n1 : 0.92 282.72 17.67 0.00 0.00 192733.98 1342.23 200842.63
00:23:39.749 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme9n1 : 0.93 275.73 17.23 0.00 0.00 193478.10 9578.34 229073.53
00:23:39.749 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.749 Job: Nvme10n1 ended in about 0.95 seconds with error
00:23:39.749 Verification LBA range: start 0x0 length 0x400
00:23:39.749 Nvme10n1 : 0.95 138.40 8.65 67.10 0.00 255607.63 17845.96 245205.46
00:23:39.749 [2024-12-09T05:22:34.336Z] ===================================================================================================================
00:23:39.749 [2024-12-09T05:22:34.336Z] Total : 2058.61 128.66 543.60 0.00 222276.21 1342.23 254884.63
00:23:39.749 [2024-12-09 06:22:34.251534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
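A quick consistency check on the table: every I/O is a fixed 65536 bytes, so MiB/s should equal IOPS / 16 in each row, and it does for the Total row as well (2058.61 / 16 = 128.66); the Fail/s column lines up with the aborted commands from the flood above. For the first row, as plain arithmetic with no SPDK tooling assumed:

  # 203.68 IOPS x 65536 B per I/O / 1048576 B per MiB = 12.73 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 203.68 * 65536 / 1048576 }'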
00:23:39.749 [2024-12-09 06:22:34.251572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:39.749 [2024-12-09 06:22:34.251994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.749 [2024-12-09 06:22:34.252013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df8d90 with addr=10.0.0.2, port=4420
00:23:39.749 [2024-12-09 06:22:34.252022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8d90 is same with the state(6) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state triple repeats for tqpairs 0x2453090, 0x1deb700, 0x220d8e0, 0x223a450, 0x227ac30, 0x2211130, 0x1d06610, 0x222e2f0 and 0x222ec10 (all addr=10.0.0.2, port=4420), interleaved with "resetting controller" notices for cnode4, cnode7, cnode9 and cnode8, "Failed to flush tqpair=... (9): Bad file descriptor" errors for each of those tqpairs, and "Unable to perform failover, already in progress" notices for cnode5, cnode3, cnode2 and cnode1 ...]
00:23:39.749 [2024-12-09 06:22:34.255797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:39.749 [2024-12-09 06:22:34.255808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:39.749 [2024-12-09 06:22:34.255816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:39.749 [2024-12-09 06:22:34.255824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[... the same four-record failure sequence (Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed) repeats for cnode2, cnode3, cnode5, cnode6, cnode10, cnode4, cnode7 and cnode9; the matching sequence for cnode8 closes with the record below ...]
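errno = 111 in the connect() failures above is ECONNREFUSED: by this point the target side of every TCP connection is already gone, so each reconnect attempt is refused and each controller reset ends in the failed state, which is exactly what this shutdown test provokes. To confirm the errno mapping on a Linux build host (header location can vary by distro):

  grep ECONNREFUSED /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */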
00:23:39.749 [2024-12-09 06:22:34.256685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:40.009 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:23:40.949 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 392323
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 392323
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 392323
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
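The NOT wait 392323 trace above is the harness asserting that the bdevperf process exited with a failure, which it must, since the target it was talking to was shut down underneath it; the raw status 255 is clamped through es=127 down to es=1. A simplified sketch of that negation-helper pattern (the real NOT and valid_exec_arg in test/common/autotest_common.sh also vet the argument, as the type -t trace shows):

  NOT() {
      # Run the wrapped command and invert its result: an expected
      # failure makes the assertion succeed.
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT wait 392323   # succeeds only because the waited-on process failed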
00:23:40.949 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 392015 ']'
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 392015
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 392015 ']'
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 392015
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (392015) - No such process
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 392015 is not found'
Process with pid 392015 is not found
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:23:41.264 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
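Kernel module unload above is wrapped in a retry loop (the for i in {1..20} traced at the end of the previous block) with set +e / set -e bracketing so a still-busy module does not abort the teardown. In sketch form, consistent with the @124-@128 trace, though the real loop body lives in nvmf/common.sh:

  set +e
  for i in {1..20}; do
      # Retry until both modules unload cleanly, then stop.
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e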
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:43.170
00:23:43.170 real 0m7.801s
00:23:43.170 user 0m19.423s
00:23:43.170 sys 0m1.192s
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:43.170 ************************************
00:23:43.170 END TEST nvmf_shutdown_tc3
00:23:43.170 ************************************
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:43.170 ************************************
00:23:43.170 START TEST nvmf_shutdown_tc4
00:23:43.170 ************************************
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:43.170 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
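The END TEST / START TEST banners above come from the harness's run_test wrapper, which runs a named test function and reports it (the real/user/sys block is its timing output). A simplified rendering of the pattern, with the caveat that the real run_test in autotest_common.sh also records pass/fail and the timing shown above:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      "$@"                 # e.g. nvmf_shutdown_tc4
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }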
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
[... nvmf/common.sh@317-@344 trace: the pci_drivers, net_devs, e810, x722 and mlx arrays are declared, then filled from pci_bus_cache with the known device IDs: e810 gets 0x1592 and 0x159b, x722 gets 0x37d2, mlx gets 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 ...]
[... @346-@361: pci_devs takes the e810 entries (the tcp transport is not rdma, and the platform list e810 is not mlx5), leaving 2 candidate devices ...]
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
[... @368-@378: the ice driver is neither unknown nor unbound, 0x159b matches neither 0x1017 nor 0x1019, and the transport is not rdma, so the port is kept ...]
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
[... the same checks pass for 0000:4b:00.1; @410-@422 then resolve each kept port's net device from /sys/bus/pci/devices/$pci/net/ ...]
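Vendor:device 0x8086:0x159b is an Intel E810 port bound to the ice driver, which is exactly what the harness's e810 list is matching on. The two ports can be cross-checked on the host directly:

  lspci -nn -d 8086:159b
  # expected: Ethernet controller entries for 4b:00.0 and 4b:00.1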
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
[... @429-@446: both interfaces join net_devs, is_hw=yes, and nvmf_tcp_init starts ...]
[... @250-@263: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, TCP_INTERFACE_LIST takes both net_devs, NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1, no second target or initiator IP ...]
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:43.171 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:43.432 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:43.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:43.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:23:43.432
00:23:43.432 --- 10.0.0.2 ping statistics ---
00:23:43.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.432 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
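The two ping blocks verify both directions of the link the harness just built: cvl_0_1 holds 10.0.0.1 in the root namespace while cvl_0_0 holds 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, so initiator and target traffic crosses the physical wire between the two E810 ports. To inspect the namespaced side by hand (a follow-up command, not part of the log):

  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0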
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:43.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:43.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:23:43.432
00:23:43.432 --- 10.0.0.1 ping statistics ---
00:23:43.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.432 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:43.432 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=393648
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 393648
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 393648 ']'
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
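nvmfappstart launches nvmf_tgt inside the target namespace (the repeated ip netns exec prefix is the harness stacking NVMF_TARGET_NS_CMD onto NVMF_APP), and waitforlisten 393648 then polls until the RPC socket is usable. A minimal sketch of that wait, assuming only the pid and the default socket path, while the real helper also retries rpc.py up to max_retries=100:

  while kill -0 393648 2>/dev/null && [ ! -S /var/tmp/spdk.sock ]; do
      sleep 0.1   # target still starting; RPC socket not there yet
  done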
00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.692 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:43.692 [2024-12-09 06:22:38.136077] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:23:43.692 [2024-12-09 06:22:38.136143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.692 [2024-12-09 06:22:38.206062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.692 [2024-12-09 06:22:38.243122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.692 [2024-12-09 06:22:38.243160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.692 [2024-12-09 06:22:38.243166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.692 [2024-12-09 06:22:38.243175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.692 [2024-12-09 06:22:38.243180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.692 [2024-12-09 06:22:38.244613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.692 [2024-12-09 06:22:38.244763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.692 [2024-12-09 06:22:38.244912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.692 [2024-12-09 06:22:38.244913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.632 [2024-12-09 06:22:38.989359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:44.632 06:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.632 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.632 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.632 Malloc1 
00:23:44.632 [2024-12-09 06:22:39.100327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.632 Malloc2 00:23:44.632 Malloc3 00:23:44.632 Malloc4 00:23:44.892 Malloc5 00:23:44.892 Malloc6 00:23:44.892 Malloc7 00:23:44.892 Malloc8 00:23:44.892 Malloc9 00:23:44.892 Malloc10 00:23:44.892 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.892 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:44.892 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.892 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:45.152 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=393800 00:23:45.152 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:45.152 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:45.152 [2024-12-09 06:22:39.579240] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 393648 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 393648 ']' 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 393648 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393648 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393648' 00:23:50.441 killing process with pid 393648 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 393648 00:23:50.441 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 393648 00:23:50.441 Write completed with error (sct=0, sc=8) 
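The create_subsystems step traced above cats ten RPC batches into rpcs.txt and replays them against the target, which is why Malloc1 through Malloc10 appear as the backing bdevs get created. The batch is equivalent to the direct rpc.py calls sketched below; the Malloc size/block size (64 MiB / 512 B) and the cnode/SPDK naming scheme are the usual test defaults and assumed here, with the transport created first using the '-t tcp -o' options traced earlier:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for i in {1..10}; do
      "$rpc" bdev_malloc_create 64 512 -b Malloc$i                           # backing bdev
      "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # allow any host
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done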
00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 [2024-12-09 06:22:44.572590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.441 starting I/O failed: -6 00:23:50.441 starting I/O failed: -6 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting 
I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 [2024-12-09 06:22:44.573545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.441 Write completed with error (sct=0, sc=8) 00:23:50.441 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 
00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 [2024-12-09 06:22:44.574377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 
00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 
00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 [2024-12-09 06:22:44.575775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.442 NVMe io qpair process completion error 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 starting I/O failed: -6 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 Write completed with error (sct=0, sc=8) 00:23:50.442 [2024-12-09 06:22:44.576848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.443 starting I/O failed: -6 00:23:50.443 starting I/O failed: -6 00:23:50.443 
[2024-12-09 06:22:44.577003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b740 is same with the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b740 is same with the state(6) to be set 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.577336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bc10 is same with the state(6) to be set 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.577356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bc10 is same with the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bc10 is same with the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bc10 is same with the state(6) to be set 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.577554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 [2024-12-09 06:22:44.577577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.577584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x181c0e0 is same with Write completed with error (sct=0, sc=8) 00:23:50.443 the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 starting I/O failed: -6 00:23:50.443 [2024-12-09 06:22:44.577613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.577619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 [2024-12-09 06:22:44.577624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c0e0 is same with the state(6) to be set 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 [2024-12-09 06:22:44.577747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 
00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 [2024-12-09 06:22:44.578639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write 
completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.443 Write completed with error (sct=0, sc=8) 00:23:50.443 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write 
completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 [2024-12-09 06:22:44.579972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.444 NVMe io qpair process completion error 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with 
error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 [2024-12-09 06:22:44.582430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 
00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.444 Write completed with error (sct=0, sc=8) 00:23:50.444 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O 
failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O 
failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 [2024-12-09 06:22:44.585407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.445 NVMe io qpair process completion error 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 Write completed with error (sct=0, sc=8) 00:23:50.445 starting I/O failed: -6 00:23:50.445 Write completed with error 
00:23:50.445 Write completed with error (sct=0, sc=8)
00:23:50.446 Write completed with error (sct=0, sc=8)
00:23:50.446 Write completed with error (sct=0, sc=8)
00:23:50.446 starting I/O failed: -6
...
00:23:50.446 [2024-12-09 06:22:44.586462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.446 Write completed with error (sct=0, sc=8)
00:23:50.446 starting I/O failed: -6
...
00:23:50.446 [2024-12-09 06:22:44.587270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.446 Write completed with error (sct=0, sc=8)
00:23:50.446 starting I/O failed: -6
...
00:23:50.446 [2024-12-09 06:22:44.588122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.446 Write completed with error (sct=0, sc=8)
00:23:50.446 starting I/O failed: -6
...
00:23:50.447 [2024-12-09 06:22:44.590461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.447 NVMe io qpair process completion error
00:23:50.447 Write completed with error (sct=0, sc=8)
00:23:50.447 Write completed with error (sct=0, sc=8)
00:23:50.447 Write completed with error (sct=0, sc=8)
00:23:50.447 starting I/O failed: -6
...
00:23:50.447 [2024-12-09 06:22:44.591523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.447 Write completed with error (sct=0, sc=8)
00:23:50.447 Write completed with error (sct=0, sc=8)
00:23:50.447 starting I/O failed: -6
...
00:23:50.447 [2024-12-09 06:22:44.592272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.448 Write completed with error (sct=0, sc=8)
00:23:50.448 starting I/O failed: -6
...
00:23:50.448 [2024-12-09 06:22:44.593152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.448 Write completed with error (sct=0, sc=8)
00:23:50.448 starting I/O failed: -6
...
00:23:50.448 [2024-12-09 06:22:44.594693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.448 NVMe io qpair process completion error
00:23:50.448 Write completed with error (sct=0, sc=8)
00:23:50.449 Write completed with error (sct=0, sc=8)
00:23:50.449 Write completed with error (sct=0, sc=8)
00:23:50.449 starting I/O failed: -6
...
00:23:50.449 [2024-12-09 06:22:44.595838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.449 Write completed with error (sct=0, sc=8)
00:23:50.449 Write completed with error (sct=0, sc=8)
00:23:50.449 starting I/O failed: -6
...
00:23:50.449 [2024-12-09 06:22:44.596596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.449 Write completed with error (sct=0, sc=8)
00:23:50.449 starting I/O failed: -6
...
00:23:50.449 [2024-12-09 06:22:44.597462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.450 Write completed with error (sct=0, sc=8)
00:23:50.450 starting I/O failed: -6
...
00:23:50.450 [2024-12-09 06:22:44.599963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.450 NVMe io qpair process completion error
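An aside for reading these records: SPDK ships public helpers that render the raw (sct, sc) pair as the spec's wording, which is convenient when scanning a log like this. A small sketch, assuming only the public spdk/nvme.h header; for the (sct=0, sc=8) status above the helpers should print roughly "GENERIC" and "ABORTED - SQ DELETION", though the exact strings are SPDK's own.

#include <stdio.h>
#include "spdk/nvme.h"

/* Decode the status seen throughout this log (sct=0, sc=8) into the
 * spec's wording; print_cpl_status() is an illustrative name. */
static void
print_cpl_status(const struct spdk_nvme_cpl *cpl)
{
	printf("%s : %s\n",
	       spdk_nvme_cpl_get_status_type_string(&cpl->status),
	       spdk_nvme_cpl_get_status_string(&cpl->status));
}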
00:23:50.450 Write completed with error (sct=0, sc=8)
00:23:50.450 Write completed with error (sct=0, sc=8)
00:23:50.450 starting I/O failed: -6
...
00:23:50.450 [2024-12-09 06:22:44.601102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.450 Write completed with error (sct=0, sc=8)
00:23:50.450 Write completed with error (sct=0, sc=8)
00:23:50.451 starting I/O failed: -6
...
00:23:50.451 [2024-12-09 06:22:44.601948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.451 Write completed with error (sct=0, sc=8)
00:23:50.451 starting I/O failed: -6
...
00:23:50.451 [2024-12-09 06:22:44.602806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.451 Write completed with error (sct=0, sc=8)
00:23:50.451 starting I/O failed: -6
...
00:23:50.452 [2024-12-09 06:22:44.604155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.452 NVMe io qpair process completion error
completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 [2024-12-09 06:22:44.605172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.452 starting I/O failed: -6 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 [2024-12-09 06:22:44.606041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: 
-6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 Write completed with error 
(sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 [2024-12-09 06:22:44.606889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.452 starting I/O failed: -6 00:23:50.452 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 
00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 [2024-12-09 06:22:44.608466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.453 NVMe io qpair process completion error 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, 
sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 [2024-12-09 06:22:44.609606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with 
error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 starting I/O failed: -6 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.453 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 [2024-12-09 06:22:44.610358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 
00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 [2024-12-09 06:22:44.611221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write 
completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write 
completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 Write completed with error (sct=0, sc=8) 00:23:50.454 starting I/O failed: -6 00:23:50.454 [2024-12-09 06:22:44.614187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.455 NVMe io qpair process completion error 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with 
error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed 
with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 [2024-12-09 06:22:44.617299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.455 Write completed with error (sct=0, sc=8) 00:23:50.455 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 
Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write 
completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 Write completed with error (sct=0, sc=8) 00:23:50.456 starting I/O failed: -6 00:23:50.456 [2024-12-09 06:22:44.619192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.456 NVMe io qpair process completion error 00:23:50.456 Initializing NVMe Controllers 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:50.456 Controller IO queue size 128, less than required. 00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
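The "CQ transport error -6 (No such device or address)" entries above are spdk_nvme_qpair_process_completions() reporting -ENXIO after the target side of each TCP qpair went away, which is the intended failure mode in this shutdown test: every write still in flight then completes with an error status. A quick, hypothetical way to triage such a run offline is to tally the transport errors per subsystem from a saved console log (the build.log path is illustrative, not from this job):

```bash
# Count CQ transport errors per target subsystem in a saved console log.
# 'build.log' is a placeholder for wherever the Jenkins output was captured.
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] CQ transport error -6' build.log |
  sort | uniq -c | sort -rn
```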
00:23:50.456 Initializing NVMe Controllers
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:50.456 Controller IO queue size 128, less than required.
00:23:50.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:50.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:50.456 [... the same 'Controller IO queue size 128' advisory was printed for each of the ten controllers ...]
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:50.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:50.456 Initialization complete. Launching workers.
00:23:50.456 ========================================================
00:23:50.456 Latency(us)
00:23:50.456 Device Information : IOPS MiB/s Average min max
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2039.06 87.62 62789.13 683.71 114543.31
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2023.10 86.93 63303.17 553.75 113990.84
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2049.91 88.08 62500.50 424.10 113186.86
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2026.08 87.06 63259.91 771.12 114837.14
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2013.10 86.50 63696.53 751.70 110115.55
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2047.78 87.99 62636.03 774.17 118270.72
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2030.34 87.24 63206.30 843.86 120441.68
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2033.32 87.37 63130.16 629.94 114803.24
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2025.66 87.04 63388.39 766.97 123687.02
00:23:50.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2032.68 87.34 63214.28 470.69 114408.32
00:23:50.456 ========================================================
00:23:50.456 Total : 20321.03 873.17 63110.66 424.10 123687.02
00:23:50.456
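The repeated "Controller IO queue size 128, less than required" advisory above means the requested queue depth exceeded what each target subsystem advertises, so excess requests queue inside the NVMe driver rather than on the wire. A sketch of re-running the same workload with a queue depth at or below 128, using spdk_nvme_perf's standard flags against one of the subsystems from this run (the -q/-o/-w/-t values are illustrative):

```bash
# -q: queue depth per qpair, -o: IO size in bytes, -w: IO pattern,
# -t: run time in seconds, -r: transport ID of one target subsystem.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```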
00:23:50.456 [2024-12-09 06:22:44.621837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf720 is same with the state(6) to be set
00:23:50.456 [2024-12-09 06:22:44.621879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf900 is same with the state(6) to be set
00:23:50.456 [2024-12-09 06:22:44.621907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cdbc0 is same with the state(6) to be set
00:23:50.456 [2024-12-09 06:22:44.621933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cd890 is same with the state(6) to be set
00:23:50.456 [2024-12-09 06:22:44.621959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cdef0 is same with the state(6) to be set
00:23:50.456 [2024-12-09 06:22:44.621985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cfae0 is same with the state(6) to be set
00:23:50.457 [2024-12-09 06:22:44.622012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cd560 is same with the state(6) to be set
00:23:50.457 [2024-12-09 06:22:44.622040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce410 is same with the state(6) to be set
00:23:50.457 [2024-12-09 06:22:44.622066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce740 is same with the state(6) to be set
00:23:50.457 [2024-12-09 06:22:44.622095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cea70 is same with the state(6) to be set
00:23:50.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
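The "# NOT wait 393800" trace below is the harness asserting that the perf process exited non-zero, which is the pass condition for this negative test: the target was torn down mid-run, so the initiator is expected to fail. A simplified sketch of the idiom from autotest_common.sh (the real helper also treats exit codes above 128, i.e. signal deaths, specially):

```bash
# Succeed only when the wrapped command fails; used as: NOT wait <pid>
NOT() {
  local es=0
  "$@" || es=$?
  # Pass (return 0) exactly when the wrapped command returned non-zero.
  (( es != 0 ))
}
```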
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.396 rmmod nvme_tcp 00:23:51.396 rmmod nvme_fabrics 00:23:51.396 rmmod nvme_keyring 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 393648 ']' 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 393648 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 393648 ']' 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 393648 00:23:51.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (393648) - No such process 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 393648 is not found' 00:23:51.396 Process with pid 393648 is not found 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.396 06:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.396 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.938 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.938 00:23:53.938 real 0m10.279s 00:23:53.938 user 0m28.241s 00:23:53.938 sys 0m3.833s 00:23:53.938 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.938 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:53.938 ************************************ 00:23:53.938 END TEST nvmf_shutdown_tc4 00:23:53.938 ************************************ 00:23:53.938 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:53.938 00:23:53.938 real 0m42.803s 00:23:53.938 user 1m45.105s 00:23:53.938 sys 0m13.260s 00:23:53.938 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.938 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:53.938 ************************************ 00:23:53.938 END TEST nvmf_shutdown 00:23:53.938 ************************************ 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.939 ************************************ 00:23:53.939 START TEST nvmf_nsid 00:23:53.939 ************************************ 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:53.939 * Looking for test storage... 
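[editor's note] The nvmf_shutdown teardown traced just above probes the target pid with kill -0 before reporting "Process with pid ... is not found". A minimal sketch of that killprocess pattern, assuming only plain bash and coreutils — the helper name and message mirror the trace, but the body is illustrative, not the autotest source:

killprocess() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid exists
    # and that we are permitted to signal it.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid"
    # 'wait' only works on children of this shell, so poll instead.
    while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done
}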
00:23:53.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.939 --rc genhtml_branch_coverage=1 00:23:53.939 --rc genhtml_function_coverage=1 00:23:53.939 --rc genhtml_legend=1 00:23:53.939 --rc geninfo_all_blocks=1 00:23:53.939 --rc geninfo_unexecuted_blocks=1 00:23:53.939 00:23:53.939 ' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.939 --rc genhtml_branch_coverage=1 00:23:53.939 --rc genhtml_function_coverage=1 00:23:53.939 --rc genhtml_legend=1 00:23:53.939 --rc geninfo_all_blocks=1 00:23:53.939 --rc geninfo_unexecuted_blocks=1 00:23:53.939 00:23:53.939 ' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.939 --rc genhtml_branch_coverage=1 00:23:53.939 --rc genhtml_function_coverage=1 00:23:53.939 --rc genhtml_legend=1 00:23:53.939 --rc geninfo_all_blocks=1 00:23:53.939 --rc geninfo_unexecuted_blocks=1 00:23:53.939 00:23:53.939 ' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.939 --rc genhtml_branch_coverage=1 00:23:53.939 --rc genhtml_function_coverage=1 00:23:53.939 --rc genhtml_legend=1 00:23:53.939 --rc geninfo_all_blocks=1 00:23:53.939 --rc geninfo_unexecuted_blocks=1 00:23:53.939 00:23:53.939 ' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.939 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.940 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:02.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:02.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:02.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.093 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:02.094 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.094 06:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:24:02.094 00:24:02.094 --- 10.0.0.2 ping statistics --- 00:24:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.094 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:02.094 00:24:02.094 --- 10.0.0.1 ping statistics --- 00:24:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.094 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=398855 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 398855 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 398855 ']' 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.094 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.094 [2024-12-09 06:22:55.860517] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:24:02.094 [2024-12-09 06:22:55.860582] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.094 [2024-12-09 06:22:55.956834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.094 [2024-12-09 06:22:56.007125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.094 [2024-12-09 06:22:56.007176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.094 [2024-12-09 06:22:56.007185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.094 [2024-12-09 06:22:56.007192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.094 [2024-12-09 06:22:56.007198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.094 [2024-12-09 06:22:56.007967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=399094 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
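[editor's note] The nvmf_tcp_init sequence traced above (common.sh@250-291) boils down to the following iproute2/iptables steps. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this run; the rest is a condensed sketch rather than the exact helper:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator NIC stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the ACCEPT rule so teardown can strip exactly these rules later
# (iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                          # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns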
00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=58ba077e-c29b-43d8-8a98-48ab29bcb0a0 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1e84d083-55e2-479b-94f6-b6e496b18707 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b98e023f-e973-4021-b62b-7f5102eeac6a 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.355 null0 00:24:02.355 null1 00:24:02.355 null2 00:24:02.355 [2024-12-09 06:22:56.807273] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:24:02.355 [2024-12-09 06:22:56.807343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399094 ] 00:24:02.355 [2024-12-09 06:22:56.810300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.355 [2024-12-09 06:22:56.834566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 399094 /var/tmp/tgt2.sock 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 399094 ']' 00:24:02.355 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:02.356 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.356 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:02.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
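[editor's note] The get_main_ns_ip call that produced tgt2addr=10.0.0.1 above maps the transport to a variable *name* and resolves it with bash indirect expansion. A sketch of that pattern, assuming the transport is available as $TEST_TRANSPORT and the IP variables are already set — the body paraphrases the trace and is not the verbatim helper:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # transport -> name of the variable to use
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"                     # indirect expansion yields that variable's value
}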
00:24:02.356 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.356 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:02.356 [2024-12-09 06:22:56.881316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.356 [2024-12-09 06:22:56.932385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.621 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.621 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:02.621 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:03.193 [2024-12-09 06:22:57.479586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.193 [2024-12-09 06:22:57.495762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:03.193 nvme0n1 nvme0n2 00:24:03.193 nvme1n1 00:24:03.193 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:03.193 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:03.193 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:04.580 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:05.519 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:05.519 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:05.519 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:05.519 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:05.519 06:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 58ba077e-c29b-43d8-8a98-48ab29bcb0a0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=58ba077ec29b43d88a9848ab29bcb0a0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 58BA077EC29B43D88A9848AB29BCB0A0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 58BA077EC29B43D88A9848AB29BCB0A0 == \5\8\B\A\0\7\7\E\C\2\9\B\4\3\D\8\8\A\9\8\4\8\A\B\2\9\B\C\B\0\A\0 ]] 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1e84d083-55e2-479b-94f6-b6e496b18707 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:05.519 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1e84d08355e2479b94f6b6e496b18707 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1E84D08355E2479B94F6B6E496B18707 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1E84D08355E2479B94F6B6E496B18707 == \1\E\8\4\D\0\8\3\5\5\E\2\4\7\9\B\9\4\F\6\B\6\E\4\9\6\B\1\8\7\0\7 ]] 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:05.780 06:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b98e023f-e973-4021-b62b-7f5102eeac6a 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b98e023fe9734021b62b7f5102eeac6a 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B98E023FE9734021B62B7F5102EEAC6A 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B98E023FE9734021B62B7F5102EEAC6A == \B\9\8\E\0\2\3\F\E\9\7\3\4\0\2\1\B\6\2\B\7\F\5\1\0\2\E\E\A\C\6\A ]] 00:24:05.780 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 399094 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 399094 ']' 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 399094 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399094 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399094' 00:24:06.040 killing process with pid 399094 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 399094 00:24:06.040 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 399094 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.299 rmmod nvme_tcp 00:24:06.299 rmmod nvme_fabrics 00:24:06.299 rmmod nvme_keyring 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 398855 ']' 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 398855 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 398855 ']' 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 398855 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 398855 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 398855' 00:24:06.299 killing process with pid 398855 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 398855 00:24:06.299 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 398855 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.558 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.466 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.466 00:24:08.466 real 0m14.901s 00:24:08.466 user 0m11.363s 00:24:08.466 
sys 0m6.855s 00:24:08.466 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.466 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.466 ************************************ 00:24:08.466 END TEST nvmf_nsid 00:24:08.466 ************************************ 00:24:08.466 06:23:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:08.466 00:24:08.466 real 12m59.165s 00:24:08.466 user 27m23.549s 00:24:08.466 sys 3m47.154s 00:24:08.466 06:23:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.466 06:23:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.466 ************************************ 00:24:08.466 END TEST nvmf_target_extra 00:24:08.466 ************************************ 00:24:08.727 06:23:03 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:08.727 06:23:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.727 06:23:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.727 06:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.727 ************************************ 00:24:08.727 START TEST nvmf_host 00:24:08.727 ************************************ 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:08.727 * Looking for test storage... 00:24:08.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:08.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.727 --rc genhtml_branch_coverage=1 00:24:08.727 --rc genhtml_function_coverage=1 00:24:08.727 --rc genhtml_legend=1 00:24:08.727 --rc geninfo_all_blocks=1 00:24:08.727 --rc geninfo_unexecuted_blocks=1 00:24:08.727 00:24:08.727 ' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:08.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.727 --rc genhtml_branch_coverage=1 00:24:08.727 --rc genhtml_function_coverage=1 00:24:08.727 --rc genhtml_legend=1 00:24:08.727 --rc geninfo_all_blocks=1 00:24:08.727 --rc geninfo_unexecuted_blocks=1 00:24:08.727 00:24:08.727 ' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:08.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.727 --rc genhtml_branch_coverage=1 00:24:08.727 --rc genhtml_function_coverage=1 00:24:08.727 --rc genhtml_legend=1 00:24:08.727 --rc geninfo_all_blocks=1 00:24:08.727 --rc geninfo_unexecuted_blocks=1 00:24:08.727 00:24:08.727 ' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:08.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.727 --rc genhtml_branch_coverage=1 00:24:08.727 --rc genhtml_function_coverage=1 00:24:08.727 --rc genhtml_legend=1 00:24:08.727 --rc geninfo_all_blocks=1 00:24:08.727 --rc geninfo_unexecuted_blocks=1 00:24:08.727 00:24:08.727 ' 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.727 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.988 ************************************ 00:24:08.988 START TEST nvmf_multicontroller 00:24:08.988 ************************************ 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:08.988 * Looking for test storage... 
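
The `[: : integer expression expected` complaint above is bash rejecting an arithmetic test on an empty string: common.sh line 33 effectively runs `[ '' -eq 1 ]` because the variable it tests is unset in this job. A defensive sketch of the same gate (the variable name is illustrative; the real one at common.sh line 33 is not visible in the trace):

    flag=""                           # unset/empty in this job
    if [ "${flag:-0}" -eq 1 ]; then   # default to 0 so -eq always sees an integer
        echo "flag enabled"
    fi
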
00:24:08.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.988 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.989 --rc genhtml_branch_coverage=1 00:24:08.989 --rc genhtml_function_coverage=1 00:24:08.989 --rc genhtml_legend=1 00:24:08.989 --rc geninfo_all_blocks=1 00:24:08.989 --rc geninfo_unexecuted_blocks=1 00:24:08.989 00:24:08.989 ' 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.989 --rc genhtml_branch_coverage=1 00:24:08.989 --rc genhtml_function_coverage=1 00:24:08.989 --rc genhtml_legend=1 00:24:08.989 --rc geninfo_all_blocks=1 00:24:08.989 --rc geninfo_unexecuted_blocks=1 00:24:08.989 00:24:08.989 ' 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.989 --rc genhtml_branch_coverage=1 00:24:08.989 --rc genhtml_function_coverage=1 00:24:08.989 --rc genhtml_legend=1 00:24:08.989 --rc geninfo_all_blocks=1 00:24:08.989 --rc geninfo_unexecuted_blocks=1 00:24:08.989 00:24:08.989 ' 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.989 --rc genhtml_branch_coverage=1 00:24:08.989 --rc genhtml_function_coverage=1 00:24:08.989 --rc genhtml_legend=1 00:24:08.989 --rc geninfo_all_blocks=1 00:24:08.989 --rc geninfo_unexecuted_blocks=1 00:24:08.989 00:24:08.989 ' 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:08.989 06:23:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.989 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.249 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.249 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.249 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.249 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.250 06:23:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.250 06:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.386 
06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.386 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.386 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.386 06:23:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.386 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
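
With both E810 ports (cvl_0_0, cvl_0_1) discovered, nvmf_tcp_init, traced next, builds the point-to-point TCP topology by moving the target-side port into its own network namespace and giving each side an address on 10.0.0.0/24. Reduced to a standalone sketch with the interface names and addresses from this run (must run as root; the iptables ACCEPT rule for port 4420 is omitted here):

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # initiator -> target, as verified below
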
00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.386 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:24:17.387 00:24:17.387 --- 10.0.0.2 ping statistics --- 00:24:17.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.387 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:17.387 00:24:17.387 --- 10.0.0.1 ping statistics --- 00:24:17.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.387 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=403794 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 403794 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 403794 ']' 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.387 06:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 [2024-12-09 06:23:10.923256] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:24:17.387 [2024-12-09 06:23:10.923321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.387 [2024-12-09 06:23:11.002139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:17.387 [2024-12-09 06:23:11.052711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.387 [2024-12-09 06:23:11.052763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.387 [2024-12-09 06:23:11.052772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.387 [2024-12-09 06:23:11.052778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.387 [2024-12-09 06:23:11.052784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.387 [2024-12-09 06:23:11.054632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.387 [2024-12-09 06:23:11.054857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.387 [2024-12-09 06:23:11.054859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 [2024-12-09 06:23:11.828972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 Malloc0 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 [2024-12-09 06:23:11.901645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 [2024-12-09 06:23:11.913583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 Malloc1 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.387 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:17.388 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.388 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=403978 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 403978 /var/tmp/bdevperf.sock 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 403978 ']' 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
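
Every rpc_cmd below that carries `-s /var/tmp/bdevperf.sock` drives the freshly started bdevperf process rather than the nvmf target; rpc_cmd forwards its arguments to scripts/rpc.py. The first attach of this test, issued directly, would look like the following (paths as used in this job; `$SPDK_DIR` is shorthand introduced here, not a variable from the scripts):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
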
00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.647 06:23:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.588 NVMe0n1 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.588 1 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.588 request: 00:24:18.588 { 00:24:18.588 "name": "NVMe0", 00:24:18.588 "trtype": "tcp", 00:24:18.588 "traddr": "10.0.0.2", 00:24:18.588 "adrfam": "ipv4", 00:24:18.588 "trsvcid": "4420", 00:24:18.588 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:18.588 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:18.588 "hostaddr": "10.0.0.1", 00:24:18.588 "prchk_reftag": false, 00:24:18.588 "prchk_guard": false, 00:24:18.588 "hdgst": false, 00:24:18.588 "ddgst": false, 00:24:18.588 "allow_unrecognized_csi": false, 00:24:18.588 "method": "bdev_nvme_attach_controller", 00:24:18.588 "req_id": 1 00:24:18.588 } 00:24:18.588 Got JSON-RPC error response 00:24:18.588 response: 00:24:18.588 { 00:24:18.588 "code": -114, 00:24:18.588 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:18.588 } 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:18.588 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.589 06:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.589 request: 00:24:18.589 { 00:24:18.589 "name": "NVMe0", 00:24:18.589 "trtype": "tcp", 00:24:18.589 "traddr": "10.0.0.2", 00:24:18.589 "adrfam": "ipv4", 00:24:18.589 "trsvcid": "4420", 00:24:18.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.589 "hostaddr": "10.0.0.1", 00:24:18.589 "prchk_reftag": false, 00:24:18.589 "prchk_guard": false, 00:24:18.589 "hdgst": false, 00:24:18.589 "ddgst": false, 00:24:18.589 "allow_unrecognized_csi": false, 00:24:18.589 "method": "bdev_nvme_attach_controller", 00:24:18.589 "req_id": 1 00:24:18.589 } 00:24:18.589 Got JSON-RPC error response 00:24:18.589 response: 00:24:18.589 { 00:24:18.589 "code": -114, 00:24:18.589 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:18.589 } 00:24:18.589 06:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.589 request: 00:24:18.589 { 00:24:18.589 "name": "NVMe0", 00:24:18.589 "trtype": "tcp", 00:24:18.589 "traddr": "10.0.0.2", 00:24:18.589 "adrfam": "ipv4", 00:24:18.589 "trsvcid": "4420", 00:24:18.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.589 "hostaddr": "10.0.0.1", 00:24:18.589 "prchk_reftag": false, 00:24:18.589 "prchk_guard": false, 00:24:18.589 "hdgst": false, 00:24:18.589 "ddgst": false, 00:24:18.589 "multipath": "disable", 00:24:18.589 "allow_unrecognized_csi": false, 00:24:18.589 "method": "bdev_nvme_attach_controller", 00:24:18.589 "req_id": 1 00:24:18.589 } 00:24:18.589 Got JSON-RPC error response 00:24:18.589 response: 00:24:18.589 { 00:24:18.589 "code": -114, 00:24:18.589 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:18.589 } 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.589 06:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.589 request: 00:24:18.589 { 00:24:18.589 "name": "NVMe0", 00:24:18.589 "trtype": "tcp", 00:24:18.589 "traddr": "10.0.0.2", 00:24:18.589 "adrfam": "ipv4", 00:24:18.589 "trsvcid": "4420", 00:24:18.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.589 "hostaddr": "10.0.0.1", 00:24:18.589 "prchk_reftag": false, 00:24:18.589 "prchk_guard": false, 00:24:18.589 "hdgst": false, 00:24:18.589 "ddgst": false, 00:24:18.589 "multipath": "failover", 00:24:18.589 "allow_unrecognized_csi": false, 00:24:18.589 "method": "bdev_nvme_attach_controller", 00:24:18.589 "req_id": 1 00:24:18.589 } 00:24:18.589 Got JSON-RPC error response 00:24:18.589 response: 00:24:18.589 { 00:24:18.589 "code": -114, 00:24:18.589 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:18.589 } 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.589 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.850 NVMe0n1 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
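
Taken together, the four -114 responses above pin down the duplicate-name rules: re-attaching under the name NVMe0 is rejected when the hostnqn differs, when the subsystem NQN differs, when multipath is disabled, and when a failover attach repeats an existing network path; the attach to port 4421 traced next succeeds because it adds a genuinely new path to the same subsystem. The resulting controller state can be inspected over the same socket (`$SPDK_DIR` as above); the trace's own check is `bdev_nvme_get_controllers | grep -c NVMe`:

    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
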
00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.850 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.110 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:19.110 06:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.050 { 00:24:20.050 "results": [ 00:24:20.050 { 00:24:20.050 "job": "NVMe0n1", 00:24:20.050 "core_mask": "0x1", 00:24:20.050 "workload": "write", 00:24:20.050 "status": "finished", 00:24:20.050 "queue_depth": 128, 00:24:20.050 "io_size": 4096, 00:24:20.050 "runtime": 1.007383, 00:24:20.050 "iops": 28500.580216263326, 00:24:20.050 "mibps": 111.33039146977862, 00:24:20.050 "io_failed": 0, 00:24:20.050 "io_timeout": 0, 00:24:20.050 "avg_latency_us": 4483.535287413294, 00:24:20.050 "min_latency_us": 1877.8584615384616, 00:24:20.050 "max_latency_us": 7965.1446153846155 00:24:20.050 } 00:24:20.050 ], 00:24:20.050 "core_count": 1 00:24:20.050 } 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 403978 ']' 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403978' 00:24:20.324 killing process with pid 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 403978 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.324 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:20.325 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:20.325 [2024-12-09 06:23:12.055973] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:24:20.325 [2024-12-09 06:23:12.056059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403978 ] 00:24:20.325 [2024-12-09 06:23:12.147946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.325 [2024-12-09 06:23:12.199509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.325 [2024-12-09 06:23:13.491665] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 4e039414-70a8-4ac5-b8fd-abd3de873c32 already exists 00:24:20.325 [2024-12-09 06:23:13.491693] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:4e039414-70a8-4ac5-b8fd-abd3de873c32 alias for bdev NVMe1n1 00:24:20.325 [2024-12-09 06:23:13.491701] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:20.325 Running I/O for 1 seconds... 00:24:20.325 28471.00 IOPS, 111.21 MiB/s 00:24:20.325 Latency(us) 00:24:20.325 [2024-12-09T05:23:14.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.325 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:20.325 NVMe0n1 : 1.01 28500.58 111.33 0.00 0.00 4483.54 1877.86 7965.14 00:24:20.325 [2024-12-09T05:23:14.912Z] =================================================================================================================== 00:24:20.325 [2024-12-09T05:23:14.912Z] Total : 28500.58 111.33 0.00 0.00 4483.54 1877.86 7965.14 00:24:20.325 Received shutdown signal, test time was about 1.000000 seconds 00:24:20.325 00:24:20.325 Latency(us) 00:24:20.325 [2024-12-09T05:23:14.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.325 [2024-12-09T05:23:14.912Z] =================================================================================================================== 00:24:20.325 [2024-12-09T05:23:14.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.325 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.325 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.325 rmmod nvme_tcp 00:24:20.325 rmmod nvme_fabrics 00:24:20.585 rmmod nvme_keyring 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:20.585 
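The nvmftestfini path above unloads the kernel initiator stack before the target process is reaped; the bare rmmod lines are modprobe's verbose output as nvme_tcp, nvme_fabrics and nvme_keyring drop out. A sketch of the visible sequence (only the commands traced above come from the log; the break and pause inside the {1..20} loop are assumptions, since the trace leaves the loop on its first pass):

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # assumed: retried while module refs linger
        sleep 1                            # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e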
06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 403794 ']' 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 403794 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 403794 ']' 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 403794 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.585 06:23:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403794 00:24:20.585 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403794' 00:24:20.586 killing process with pid 403794 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 403794 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 403794 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.586 06:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.131 06:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.131 00:24:23.131 real 0m13.842s 00:24:23.131 user 0m17.542s 00:24:23.131 sys 0m6.330s 00:24:23.131 06:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.131 06:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.132 ************************************ 00:24:23.132 END TEST nvmf_multicontroller 00:24:23.132 ************************************ 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.132 ************************************ 00:24:23.132 START TEST nvmf_aer 00:24:23.132 ************************************ 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:23.132 * Looking for test storage... 00:24:23.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.132 --rc genhtml_branch_coverage=1 00:24:23.132 --rc genhtml_function_coverage=1 00:24:23.132 --rc genhtml_legend=1 00:24:23.132 --rc geninfo_all_blocks=1 00:24:23.132 --rc geninfo_unexecuted_blocks=1 00:24:23.132 00:24:23.132 ' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.132 --rc genhtml_branch_coverage=1 00:24:23.132 --rc genhtml_function_coverage=1 00:24:23.132 --rc genhtml_legend=1 00:24:23.132 --rc geninfo_all_blocks=1 00:24:23.132 --rc geninfo_unexecuted_blocks=1 00:24:23.132 00:24:23.132 ' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.132 --rc genhtml_branch_coverage=1 00:24:23.132 --rc genhtml_function_coverage=1 00:24:23.132 --rc genhtml_legend=1 00:24:23.132 --rc geninfo_all_blocks=1 00:24:23.132 --rc geninfo_unexecuted_blocks=1 00:24:23.132 00:24:23.132 ' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.132 --rc genhtml_branch_coverage=1 00:24:23.132 --rc genhtml_function_coverage=1 00:24:23.132 --rc genhtml_legend=1 00:24:23.132 --rc geninfo_all_blocks=1 00:24:23.132 --rc geninfo_unexecuted_blocks=1 00:24:23.132 00:24:23.132 ' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.132 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.133 06:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.264 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.265 06:23:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.265 
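nvmf_tcp_init above builds the usual two-namespace rig for the phy tests: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and an iptables rule opens TCP 4420 on the initiator side. Collected in order as a sketch, these are the commands traced above (device names and addresses are specific to this rig):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target side, isolated namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The pings that follow confirm reachability in both directions before the nvmf target is started inside the namespace.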
06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:24:31.265 00:24:31.265 --- 10.0.0.2 ping statistics --- 00:24:31.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.265 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:24:31.265 00:24:31.265 --- 10.0.0.1 ping statistics --- 00:24:31.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.265 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=408392 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 408392 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 408392 ']' 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.265 06:23:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 [2024-12-09 06:23:24.866124] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:24:31.265 [2024-12-09 06:23:24.866187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.265 [2024-12-09 06:23:24.962913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.265 [2024-12-09 06:23:25.014283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.265 [2024-12-09 06:23:25.014335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.265 [2024-12-09 06:23:25.014343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.265 [2024-12-09 06:23:25.014350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.265 [2024-12-09 06:23:25.014356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.265 [2024-12-09 06:23:25.016304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.265 [2024-12-09 06:23:25.016481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.265 [2024-12-09 06:23:25.016585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.265 [2024-12-09 06:23:25.016587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 [2024-12-09 06:23:25.764492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 Malloc0 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.265 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.266 [2024-12-09 06:23:25.840836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.266 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.526 [ 00:24:31.526 { 00:24:31.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:31.526 "subtype": "Discovery", 00:24:31.526 "listen_addresses": [], 00:24:31.526 "allow_any_host": true, 00:24:31.526 "hosts": [] 00:24:31.526 }, 00:24:31.526 { 00:24:31.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.526 "subtype": "NVMe", 00:24:31.526 "listen_addresses": [ 00:24:31.526 { 00:24:31.526 "trtype": "TCP", 00:24:31.526 "adrfam": "IPv4", 00:24:31.526 "traddr": "10.0.0.2", 00:24:31.526 "trsvcid": "4420" 00:24:31.526 } 00:24:31.526 ], 00:24:31.526 "allow_any_host": true, 00:24:31.526 "hosts": [], 00:24:31.526 "serial_number": "SPDK00000000000001", 00:24:31.526 "model_number": "SPDK bdev Controller", 00:24:31.526 "max_namespaces": 2, 00:24:31.526 "min_cntlid": 1, 00:24:31.526 "max_cntlid": 65519, 00:24:31.526 "namespaces": [ 00:24:31.526 { 00:24:31.526 "nsid": 1, 00:24:31.526 "bdev_name": "Malloc0", 00:24:31.526 "name": "Malloc0", 00:24:31.526 "nguid": "19D8268865A54A78BB361D00D76A0BD8", 00:24:31.526 "uuid": "19d82688-65a5-4a78-bb36-1d00d76a0bd8" 00:24:31.526 } 00:24:31.526 ] 00:24:31.526 } 00:24:31.526 ] 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=408671 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:31.526 06:23:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:31.526 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:31.526 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:31.526 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:31.526 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:31.527 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.527 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.787 Malloc1 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.788 Asynchronous Event Request test 00:24:31.788 Attaching to 10.0.0.2 00:24:31.788 Attached to 10.0.0.2 00:24:31.788 Registering asynchronous event callbacks... 00:24:31.788 Starting namespace attribute notice tests for all controllers... 00:24:31.788 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:31.788 aer_cb - Changed Namespace 00:24:31.788 Cleaning up... 
00:24:31.788 [ 00:24:31.788 { 00:24:31.788 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:31.788 "subtype": "Discovery", 00:24:31.788 "listen_addresses": [], 00:24:31.788 "allow_any_host": true, 00:24:31.788 "hosts": [] 00:24:31.788 }, 00:24:31.788 { 00:24:31.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.788 "subtype": "NVMe", 00:24:31.788 "listen_addresses": [ 00:24:31.788 { 00:24:31.788 "trtype": "TCP", 00:24:31.788 "adrfam": "IPv4", 00:24:31.788 "traddr": "10.0.0.2", 00:24:31.788 "trsvcid": "4420" 00:24:31.788 } 00:24:31.788 ], 00:24:31.788 "allow_any_host": true, 00:24:31.788 "hosts": [], 00:24:31.788 "serial_number": "SPDK00000000000001", 00:24:31.788 "model_number": "SPDK bdev Controller", 00:24:31.788 "max_namespaces": 2, 00:24:31.788 "min_cntlid": 1, 00:24:31.788 "max_cntlid": 65519, 00:24:31.788 "namespaces": [ 00:24:31.788 { 00:24:31.788 "nsid": 1, 00:24:31.788 "bdev_name": "Malloc0", 00:24:31.788 "name": "Malloc0", 00:24:31.788 "nguid": "19D8268865A54A78BB361D00D76A0BD8", 00:24:31.788 "uuid": "19d82688-65a5-4a78-bb36-1d00d76a0bd8" 00:24:31.788 }, 00:24:31.788 { 00:24:31.788 "nsid": 2, 00:24:31.788 "bdev_name": "Malloc1", 00:24:31.788 "name": "Malloc1", 00:24:31.788 "nguid": "BEFD60773F274CE7A7A5CF193CA5DDF8", 00:24:31.788 "uuid": "befd6077-3f27-4ce7-a7a5-cf193ca5ddf8" 00:24:31.788 } 00:24:31.788 ] 00:24:31.788 } 00:24:31.788 ] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 408671 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.788 rmmod 
nvme_tcp 00:24:31.788 rmmod nvme_fabrics 00:24:31.788 rmmod nvme_keyring 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 408392 ']' 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 408392 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 408392 ']' 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 408392 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408392 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408392' 00:24:31.788 killing process with pid 408392 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 408392 00:24:31.788 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 408392 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.049 06:23:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.590 00:24:34.590 real 0m11.287s 00:24:34.590 user 0m8.195s 00:24:34.590 sys 0m5.969s 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.590 ************************************ 00:24:34.590 END TEST nvmf_aer 00:24:34.590 ************************************ 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.590 ************************************ 00:24:34.590 START TEST nvmf_async_init 00:24:34.590 ************************************ 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:34.590 * Looking for test storage... 00:24:34.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:34.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.590 --rc genhtml_branch_coverage=1 00:24:34.590 --rc genhtml_function_coverage=1 00:24:34.590 --rc genhtml_legend=1 00:24:34.590 --rc geninfo_all_blocks=1 00:24:34.590 --rc geninfo_unexecuted_blocks=1 00:24:34.590 00:24:34.590 ' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:34.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.590 --rc genhtml_branch_coverage=1 00:24:34.590 --rc genhtml_function_coverage=1 00:24:34.590 --rc genhtml_legend=1 00:24:34.590 --rc geninfo_all_blocks=1 00:24:34.590 --rc geninfo_unexecuted_blocks=1 00:24:34.590 00:24:34.590 ' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:34.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.590 --rc genhtml_branch_coverage=1 00:24:34.590 --rc genhtml_function_coverage=1 00:24:34.590 --rc genhtml_legend=1 00:24:34.590 --rc geninfo_all_blocks=1 00:24:34.590 --rc geninfo_unexecuted_blocks=1 00:24:34.590 00:24:34.590 ' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:34.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.590 --rc genhtml_branch_coverage=1 00:24:34.590 --rc genhtml_function_coverage=1 00:24:34.590 --rc genhtml_legend=1 00:24:34.590 --rc geninfo_all_blocks=1 00:24:34.590 --rc geninfo_unexecuted_blocks=1 00:24:34.590 00:24:34.590 ' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.590 06:23:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:34.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:34.590 06:23:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:34.590 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=20e0e7f820dc4f72b49a6fc33813fdb5 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.591 06:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:42.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:42.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:42.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:42.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.724 06:23:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.724 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.725 06:23:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:24:42.725 00:24:42.725 --- 10.0.0.2 ping statistics --- 00:24:42.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.725 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:42.725 00:24:42.725 --- 10.0.0.1 ping statistics --- 00:24:42.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.725 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=412600 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 412600 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 412600 ']' 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:42.725 [2024-12-09 06:23:36.235651] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
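Note on the setup traced above: nvmftestinit moved the target-side port (cvl_0_0) into its own network namespace, addressed both ends on 10.0.0.0/24, opened TCP/4420 through iptables, and verified reachability with one ping in each direction before launching nvmf_tgt inside the namespace pinned to core 0 (-m 0x1; the startup notices below confirm a single reactor). A minimal standalone sketch of the same bring-up, assuming the interface names and addresses this run chose and an SPDK build at ./build/bin (the absolute Jenkins path in the log is workspace-specific):

    ip netns add cvl_0_0_ns_spdk                      # target NIC gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # launch the target in the namespace; backgrounding here stands in for the
    # harness's waitforlisten handling
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &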
00:24:42.725 [2024-12-09 06:23:36.235716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.725 [2024-12-09 06:23:36.332636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.725 [2024-12-09 06:23:36.381994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.725 [2024-12-09 06:23:36.382048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.725 [2024-12-09 06:23:36.382057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.725 [2024-12-09 06:23:36.382064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.725 [2024-12-09 06:23:36.382070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.725 [2024-12-09 06:23:36.382858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 [2024-12-09 06:23:37.088503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 null0 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20e0e7f820dc4f72b49a6fc33813fdb5 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 [2024-12-09 06:23:37.140801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.725 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 nvme0n1 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 [ 00:24:42.985 { 00:24:42.985 "name": "nvme0n1", 00:24:42.985 "aliases": [ 00:24:42.985 "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5" 00:24:42.985 ], 00:24:42.985 "product_name": "NVMe disk", 00:24:42.985 "block_size": 512, 00:24:42.985 "num_blocks": 2097152, 00:24:42.985 "uuid": "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5", 00:24:42.985 "numa_id": 0, 00:24:42.985 "assigned_rate_limits": { 00:24:42.985 "rw_ios_per_sec": 0, 00:24:42.985 "rw_mbytes_per_sec": 0, 00:24:42.985 "r_mbytes_per_sec": 0, 00:24:42.985 "w_mbytes_per_sec": 0 00:24:42.985 }, 00:24:42.985 "claimed": false, 00:24:42.985 "zoned": false, 00:24:42.985 "supported_io_types": { 00:24:42.985 "read": true, 00:24:42.985 "write": true, 00:24:42.985 "unmap": false, 00:24:42.985 "flush": true, 00:24:42.985 "reset": true, 00:24:42.985 "nvme_admin": true, 00:24:42.985 "nvme_io": true, 00:24:42.985 "nvme_io_md": false, 00:24:42.985 "write_zeroes": true, 00:24:42.985 "zcopy": false, 00:24:42.985 "get_zone_info": false, 00:24:42.985 "zone_management": false, 00:24:42.985 "zone_append": false, 00:24:42.985 "compare": true, 00:24:42.985 "compare_and_write": true, 00:24:42.985 "abort": true, 00:24:42.985 "seek_hole": false, 00:24:42.985 "seek_data": false, 00:24:42.985 "copy": true, 00:24:42.985 "nvme_iov_md": false 00:24:42.985 }, 00:24:42.985 
"memory_domains": [ 00:24:42.985 { 00:24:42.985 "dma_device_id": "system", 00:24:42.985 "dma_device_type": 1 00:24:42.985 } 00:24:42.985 ], 00:24:42.985 "driver_specific": { 00:24:42.985 "nvme": [ 00:24:42.985 { 00:24:42.985 "trid": { 00:24:42.985 "trtype": "TCP", 00:24:42.985 "adrfam": "IPv4", 00:24:42.985 "traddr": "10.0.0.2", 00:24:42.985 "trsvcid": "4420", 00:24:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:42.985 }, 00:24:42.985 "ctrlr_data": { 00:24:42.985 "cntlid": 1, 00:24:42.985 "vendor_id": "0x8086", 00:24:42.985 "model_number": "SPDK bdev Controller", 00:24:42.985 "serial_number": "00000000000000000000", 00:24:42.985 "firmware_revision": "25.01", 00:24:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:42.985 "oacs": { 00:24:42.985 "security": 0, 00:24:42.985 "format": 0, 00:24:42.985 "firmware": 0, 00:24:42.985 "ns_manage": 0 00:24:42.985 }, 00:24:42.985 "multi_ctrlr": true, 00:24:42.985 "ana_reporting": false 00:24:42.985 }, 00:24:42.985 "vs": { 00:24:42.985 "nvme_version": "1.3" 00:24:42.985 }, 00:24:42.985 "ns_data": { 00:24:42.985 "id": 1, 00:24:42.985 "can_share": true 00:24:42.985 } 00:24:42.985 } 00:24:42.985 ], 00:24:42.985 "mp_policy": "active_passive" 00:24:42.985 } 00:24:42.985 } 00:24:42.985 ] 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 [2024-12-09 06:23:37.401333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:42.985 [2024-12-09 06:23:37.401408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd1870 (9): Bad file descriptor 00:24:42.985 [2024-12-09 06:23:37.543550] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.985 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 [ 00:24:42.985 { 00:24:42.985 "name": "nvme0n1", 00:24:42.985 "aliases": [ 00:24:42.985 "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5" 00:24:42.985 ], 00:24:42.985 "product_name": "NVMe disk", 00:24:42.985 "block_size": 512, 00:24:42.985 "num_blocks": 2097152, 00:24:42.985 "uuid": "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5", 00:24:42.985 "numa_id": 0, 00:24:42.985 "assigned_rate_limits": { 00:24:42.985 "rw_ios_per_sec": 0, 00:24:42.985 "rw_mbytes_per_sec": 0, 00:24:42.985 "r_mbytes_per_sec": 0, 00:24:42.985 "w_mbytes_per_sec": 0 00:24:42.985 }, 00:24:42.985 "claimed": false, 00:24:42.985 "zoned": false, 00:24:42.985 "supported_io_types": { 00:24:42.985 "read": true, 00:24:42.985 "write": true, 00:24:42.985 "unmap": false, 00:24:42.985 "flush": true, 00:24:42.985 "reset": true, 00:24:42.985 "nvme_admin": true, 00:24:42.985 "nvme_io": true, 00:24:42.985 "nvme_io_md": false, 00:24:42.985 "write_zeroes": true, 00:24:42.985 "zcopy": false, 00:24:42.985 "get_zone_info": false, 00:24:42.985 "zone_management": false, 00:24:42.985 "zone_append": false, 00:24:42.985 "compare": true, 00:24:42.985 "compare_and_write": true, 00:24:42.985 "abort": true, 00:24:42.985 "seek_hole": false, 00:24:42.985 "seek_data": false, 00:24:42.985 "copy": true, 00:24:42.985 "nvme_iov_md": false 00:24:42.985 }, 00:24:42.985 "memory_domains": [ 00:24:42.985 { 00:24:42.985 "dma_device_id": "system", 00:24:42.985 "dma_device_type": 1 00:24:42.985 } 00:24:42.985 ], 00:24:42.985 "driver_specific": { 00:24:42.985 "nvme": [ 00:24:42.985 { 00:24:42.985 "trid": { 00:24:42.985 "trtype": "TCP", 00:24:42.985 "adrfam": "IPv4", 00:24:42.985 "traddr": "10.0.0.2", 00:24:42.985 "trsvcid": "4420", 00:24:42.986 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:42.986 }, 00:24:42.986 "ctrlr_data": { 00:24:42.986 "cntlid": 2, 00:24:42.986 "vendor_id": "0x8086", 00:24:42.986 "model_number": "SPDK bdev Controller", 00:24:42.986 "serial_number": "00000000000000000000", 00:24:42.986 "firmware_revision": "25.01", 00:24:42.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:42.986 "oacs": { 00:24:42.986 "security": 0, 00:24:42.986 "format": 0, 00:24:42.986 "firmware": 0, 00:24:42.986 "ns_manage": 0 00:24:42.986 }, 00:24:42.986 "multi_ctrlr": true, 00:24:42.986 "ana_reporting": false 00:24:42.986 }, 00:24:42.986 "vs": { 00:24:42.986 "nvme_version": "1.3" 00:24:42.986 }, 00:24:42.986 "ns_data": { 00:24:42.986 "id": 1, 00:24:42.986 "can_share": true 00:24:42.986 } 00:24:42.986 } 00:24:42.986 ], 00:24:42.986 "mp_policy": "active_passive" 00:24:42.986 } 00:24:42.986 } 00:24:42.986 ] 00:24:42.986 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.986 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.986 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.986 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
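For reference, the target-side bring-up this test traced condenses to a handful of RPCs: create the TCP transport, back a namespace with a null bdev, publish it in a subsystem carrying the generated NGUID, listen on 4420, then attach and detach from the host side. A condensed sketch, assuming rpc.py is run from the SPDK repo root against the target started above:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512          # 1024 MiB bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20e0e7f820dc4f72b49a6fc33813fdb5
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0   # surfaces nvme0n1
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The -a flag allows any host to connect; the dashed form of the same NGUID reappears as the namespace UUID in the bdev dumps above.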
00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.robT5z5QHT 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.robT5z5QHT 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.robT5z5QHT 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 [2024-12-09 06:23:37.618001] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.245 [2024-12-09 06:23:37.618148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 [2024-12-09 06:23:37.638068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.245 nvme0n1 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 [ 00:24:43.245 { 00:24:43.245 "name": "nvme0n1", 00:24:43.245 "aliases": [ 00:24:43.245 "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5" 00:24:43.245 ], 00:24:43.245 "product_name": "NVMe disk", 00:24:43.245 "block_size": 512, 00:24:43.245 "num_blocks": 2097152, 00:24:43.245 "uuid": "20e0e7f8-20dc-4f72-b49a-6fc33813fdb5", 00:24:43.245 "numa_id": 0, 00:24:43.245 "assigned_rate_limits": { 00:24:43.245 "rw_ios_per_sec": 0, 00:24:43.245 "rw_mbytes_per_sec": 0, 00:24:43.245 "r_mbytes_per_sec": 0, 00:24:43.245 "w_mbytes_per_sec": 0 00:24:43.245 }, 00:24:43.245 "claimed": false, 00:24:43.245 "zoned": false, 00:24:43.245 "supported_io_types": { 00:24:43.245 "read": true, 00:24:43.245 "write": true, 00:24:43.245 "unmap": false, 00:24:43.245 "flush": true, 00:24:43.245 "reset": true, 00:24:43.245 "nvme_admin": true, 00:24:43.245 "nvme_io": true, 00:24:43.245 "nvme_io_md": false, 00:24:43.245 "write_zeroes": true, 00:24:43.245 "zcopy": false, 00:24:43.245 "get_zone_info": false, 00:24:43.245 "zone_management": false, 00:24:43.245 "zone_append": false, 00:24:43.245 "compare": true, 00:24:43.245 "compare_and_write": true, 00:24:43.245 "abort": true, 00:24:43.245 "seek_hole": false, 00:24:43.245 "seek_data": false, 00:24:43.245 "copy": true, 00:24:43.245 "nvme_iov_md": false 00:24:43.245 }, 00:24:43.245 "memory_domains": [ 00:24:43.245 { 00:24:43.245 "dma_device_id": "system", 00:24:43.245 "dma_device_type": 1 00:24:43.245 } 00:24:43.245 ], 00:24:43.245 "driver_specific": { 00:24:43.245 "nvme": [ 00:24:43.245 { 00:24:43.245 "trid": { 00:24:43.245 "trtype": "TCP", 00:24:43.245 "adrfam": "IPv4", 00:24:43.245 "traddr": "10.0.0.2", 00:24:43.245 "trsvcid": "4421", 00:24:43.245 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:43.245 }, 00:24:43.245 "ctrlr_data": { 00:24:43.245 "cntlid": 3, 00:24:43.245 "vendor_id": "0x8086", 00:24:43.245 "model_number": "SPDK bdev Controller", 00:24:43.245 "serial_number": "00000000000000000000", 00:24:43.245 "firmware_revision": "25.01", 00:24:43.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:43.245 "oacs": { 00:24:43.245 "security": 0, 00:24:43.245 "format": 0, 00:24:43.245 "firmware": 0, 00:24:43.245 "ns_manage": 0 00:24:43.245 }, 00:24:43.245 "multi_ctrlr": true, 00:24:43.245 "ana_reporting": false 00:24:43.245 }, 00:24:43.245 "vs": { 00:24:43.245 "nvme_version": "1.3" 00:24:43.245 }, 00:24:43.245 "ns_data": { 00:24:43.245 "id": 1, 00:24:43.245 "can_share": true 00:24:43.245 } 00:24:43.245 } 00:24:43.245 ], 00:24:43.245 "mp_policy": "active_passive" 00:24:43.245 } 00:24:43.245 } 00:24:43.245 ] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.robT5z5QHT 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
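The TLS leg just logged repeats the attach on port 4421: the retained PSK is written to a mode-0600 file, registered with the keyring as key0, anyone-can-connect is switched off, a --secure-channel listener is added, and both the allowed-host entry and the host-side attach reference the PSK. Condensed sketch of those RPCs (same hedges as above; the /tmp/tmp.robT5z5QHT name is just what mktemp returned in this run):

    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$KEY"

Both the listener and the initiator attach print "TLS support is considered experimental", so this path is exercised by the test but not treated as stable.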
00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.245 rmmod nvme_tcp 00:24:43.245 rmmod nvme_fabrics 00:24:43.245 rmmod nvme_keyring 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 412600 ']' 00:24:43.245 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 412600 00:24:43.246 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 412600 ']' 00:24:43.246 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 412600 00:24:43.246 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:43.246 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.246 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 412600 00:24:43.505 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.505 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.505 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 412600' 00:24:43.505 killing process with pid 412600 00:24:43.505 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 412600 00:24:43.505 06:23:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 412600 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.505 
06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.505 06:23:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:46.050 00:24:46.050 real 0m11.399s 00:24:46.050 user 0m3.960s 00:24:46.050 sys 0m5.928s 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 ************************************ 00:24:46.050 END TEST nvmf_async_init 00:24:46.050 ************************************ 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 ************************************ 00:24:46.050 START TEST dma 00:24:46.050 ************************************ 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:46.050 * Looking for test storage... 00:24:46.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.050 --rc genhtml_branch_coverage=1 00:24:46.050 --rc genhtml_function_coverage=1 00:24:46.050 --rc genhtml_legend=1 00:24:46.050 --rc geninfo_all_blocks=1 00:24:46.050 --rc geninfo_unexecuted_blocks=1 00:24:46.050 00:24:46.050 ' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.050 --rc genhtml_branch_coverage=1 00:24:46.050 --rc genhtml_function_coverage=1 00:24:46.050 --rc genhtml_legend=1 00:24:46.050 --rc geninfo_all_blocks=1 00:24:46.050 --rc geninfo_unexecuted_blocks=1 00:24:46.050 00:24:46.050 ' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.050 --rc genhtml_branch_coverage=1 00:24:46.050 --rc genhtml_function_coverage=1 00:24:46.050 --rc genhtml_legend=1 00:24:46.050 --rc geninfo_all_blocks=1 00:24:46.050 --rc geninfo_unexecuted_blocks=1 00:24:46.050 00:24:46.050 ' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.050 --rc genhtml_branch_coverage=1 00:24:46.050 --rc genhtml_function_coverage=1 00:24:46.050 --rc genhtml_legend=1 00:24:46.050 --rc geninfo_all_blocks=1 00:24:46.050 --rc geninfo_unexecuted_blocks=1 00:24:46.050 00:24:46.050 ' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.050 
06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:46.050 00:24:46.050 real 0m0.238s 00:24:46.050 user 0m0.128s 00:24:46.050 sys 0m0.125s 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 ************************************ 00:24:46.050 END TEST dma 00:24:46.050 ************************************ 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.050 ************************************ 00:24:46.050 START TEST nvmf_identify 00:24:46.050 
************************************ 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:46.050 * Looking for test storage... 00:24:46.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.050 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.311 --rc genhtml_branch_coverage=1 00:24:46.311 --rc genhtml_function_coverage=1 00:24:46.311 --rc genhtml_legend=1 00:24:46.311 --rc geninfo_all_blocks=1 00:24:46.311 --rc geninfo_unexecuted_blocks=1 00:24:46.311 00:24:46.311 ' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.311 --rc genhtml_branch_coverage=1 00:24:46.311 --rc genhtml_function_coverage=1 00:24:46.311 --rc genhtml_legend=1 00:24:46.311 --rc geninfo_all_blocks=1 00:24:46.311 --rc geninfo_unexecuted_blocks=1 00:24:46.311 00:24:46.311 ' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.311 --rc genhtml_branch_coverage=1 00:24:46.311 --rc genhtml_function_coverage=1 00:24:46.311 --rc genhtml_legend=1 00:24:46.311 --rc geninfo_all_blocks=1 00:24:46.311 --rc geninfo_unexecuted_blocks=1 00:24:46.311 00:24:46.311 ' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.311 --rc genhtml_branch_coverage=1 00:24:46.311 --rc genhtml_function_coverage=1 00:24:46.311 --rc genhtml_legend=1 00:24:46.311 --rc geninfo_all_blocks=1 00:24:46.311 --rc geninfo_unexecuted_blocks=1 00:24:46.311 00:24:46.311 ' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.311 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.312 06:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.450 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:54.451 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:54.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
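The scan above is the harness's NIC discovery: nvmf/common.sh carries a table of supported Intel E810/X722 and Mellanox PCI device IDs, matches each PCI function against it, and resolves every match to its kernel interface through sysfs (hence the "Found net devices under 0000:4b:00.0: cvl_0_0" lines). A minimal standalone sketch of that resolution step, assuming only the vendor/device pair the log reports (the script and variable names here are ours, not SPDK's). Note in passing that the "[: : integer expression expected" complaints earlier come from common.sh line 33 applying -eq to an empty string; a "${VAR:-0}" default on the tested variable would silence them.

  #!/usr/bin/env bash
  # Resolve the network interfaces sitting on a given PCI vendor:device pair --
  # the same sysfs walk the traced common.sh performs. A sketch, not the SPDK script.
  set -euo pipefail

  vendor=0x8086    # Intel, as matched above
  device=0x159b    # E810-C, the ID the log reports for 0000:4b:00.0 and .1

  for pci in /sys/bus/pci/devices/*; do
    [[ "$(cat "$pci/vendor")" == "$vendor" ]] || continue
    [[ "$(cat "$pci/device")" == "$device" ]] || continue
    # A port bound to a netdev driver exposes its interface names under net/.
    for net in "$pci"/net/*; do
      [[ -e "$net" ]] || continue
      echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
  done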
00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:54.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:54.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.451 06:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:54.451 00:24:54.451 --- 10.0.0.2 ping statistics --- 00:24:54.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.451 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:24:54.451 00:24:54.451 --- 10.0.0.1 ping statistics --- 00:24:54.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.451 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=417082 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 417082 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 417082 ']' 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.451 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.451 [2024-12-09 06:23:48.107328] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
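Just above, the harness finished its network bring-up and launched nvmf_tgt: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction proves the path. A sketch reproducing the same topology on a box without spare ports, substituting a veth pair for the two physical interfaces (the veth names are ours; the namespace and addresses follow the log; run as root):

  #!/usr/bin/env bash
  # Target in a namespace, initiator in the root namespace, one /24 between them.
  set -euo pipefail

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link add ini0 type veth peer name tgt0   # stand-ins for cvl_0_1 / cvl_0_0
  ip link set tgt0 netns "$NS"

  ip addr add 10.0.0.1/24 dev ini0            # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0
  ip link set ini0 up
  ip netns exec "$NS" ip link set tgt0 up
  ip netns exec "$NS" ip link set lo up

  # The same sanity checks the harness runs before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Launching the target under ip netns exec, as the nvmf_tgt line above does, keeps its 4420 listener isolated from the host stack, so the initiator's connection genuinely crosses a network hop.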
00:24:54.452 [2024-12-09 06:23:48.107393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.452 [2024-12-09 06:23:48.206105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.452 [2024-12-09 06:23:48.258695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.452 [2024-12-09 06:23:48.258749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.452 [2024-12-09 06:23:48.258757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.452 [2024-12-09 06:23:48.258764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.452 [2024-12-09 06:23:48.258770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.452 [2024-12-09 06:23:48.260721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.452 [2024-12-09 06:23:48.260877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.452 [2024-12-09 06:23:48.261026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.452 [2024-12-09 06:23:48.261026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.452 [2024-12-09 06:23:48.952542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.452 06:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.452 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:54.452 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.452 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 Malloc0 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 [2024-12-09 06:23:49.067940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:54.714 [ 00:24:54.714 { 00:24:54.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:54.714 "subtype": "Discovery", 00:24:54.714 "listen_addresses": [ 00:24:54.714 { 00:24:54.714 "trtype": "TCP", 00:24:54.714 "adrfam": "IPv4", 00:24:54.714 "traddr": "10.0.0.2", 00:24:54.714 "trsvcid": "4420" 00:24:54.714 } 00:24:54.714 ], 00:24:54.714 "allow_any_host": true, 00:24:54.714 "hosts": [] 00:24:54.714 }, 00:24:54.714 { 00:24:54.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.714 "subtype": "NVMe", 00:24:54.714 "listen_addresses": [ 00:24:54.714 { 00:24:54.714 "trtype": "TCP", 00:24:54.714 "adrfam": "IPv4", 00:24:54.714 "traddr": "10.0.0.2", 00:24:54.714 "trsvcid": "4420" 00:24:54.714 } 00:24:54.714 ], 00:24:54.714 "allow_any_host": true, 00:24:54.714 "hosts": [], 00:24:54.714 "serial_number": "SPDK00000000000001", 00:24:54.714 "model_number": "SPDK bdev Controller", 00:24:54.714 "max_namespaces": 32, 00:24:54.714 "min_cntlid": 1, 00:24:54.714 "max_cntlid": 65519, 00:24:54.714 "namespaces": [ 00:24:54.714 { 00:24:54.714 "nsid": 1, 00:24:54.714 "bdev_name": "Malloc0", 00:24:54.714 "name": "Malloc0", 00:24:54.714 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:54.714 "eui64": "ABCDEF0123456789", 00:24:54.714 "uuid": "b5fdb99d-33a9-406a-8ef2-24c91791173c" 00:24:54.714 } 00:24:54.714 ] 00:24:54.714 } 00:24:54.714 ] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.714 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:54.714 [2024-12-09 06:23:49.134908] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:24:54.714 [2024-12-09 06:23:49.134972] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417189 ] 00:24:54.714 [2024-12-09 06:23:49.191438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:54.714 [2024-12-09 06:23:49.191513] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:54.714 [2024-12-09 06:23:49.191518] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:54.714 [2024-12-09 06:23:49.191538] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:54.714 [2024-12-09 06:23:49.191553] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:54.714 [2024-12-09 06:23:49.192476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:54.715 [2024-12-09 06:23:49.192535] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d5e690 0 00:24:54.715 [2024-12-09 06:23:49.202467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:54.715 [2024-12-09 06:23:49.202482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:54.715 [2024-12-09 06:23:49.202487] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:54.715 [2024-12-09 06:23:49.202491] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:54.715 [2024-12-09 06:23:49.202534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.202540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.202544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.202560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:54.715 [2024-12-09 06:23:49.202581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.210460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.210469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.210473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.210490] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:54.715 [2024-12-09 06:23:49.210499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:54.715 [2024-12-09 06:23:49.210504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:54.715 [2024-12-09 06:23:49.210519] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.210535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.210550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.210725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.210731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.210734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.210744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:54.715 [2024-12-09 06:23:49.210752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:54.715 [2024-12-09 06:23:49.210759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.210766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.210772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.210787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.210993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.210999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.211002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.211011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:54.715 [2024-12-09 06:23:49.211019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.211038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.211048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 
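The DEBUG stream running through this stretch is SPDK's host-side controller bring-up against the discovery subsystem: a FABRIC CONNECT on the admin queue (cid 0), PROPERTY GET reads of VS and CAP, the CC.EN = 0 && CSTS.RDY = 0 disable check, then (as the entries just below continue) writing CC.EN = 1 and polling until CSTS.RDY = 1 before IDENTIFY is issued. The kernel NVMe-oF host performs the same handshake; driven against this target it would look like the following nvme-cli session (a usage sketch; the NQNs and address are the ones printed earlier in the log):

  # Connect to the I/O subsystem the RPC calls created, then tear down.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  nvme list                                   # Malloc0 should surface as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1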
00:24:54.715 [2024-12-09 06:23:49.211235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.211241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.211244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.211253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.211275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.211285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.211483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.211489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.211492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.211500] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:54.715 [2024-12-09 06:23:49.211505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211624] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:54.715 [2024-12-09 06:23:49.211629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.211654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.211665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.211826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.211832] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.211836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.211843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:54.715 [2024-12-09 06:23:49.211852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.211859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.211866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.211875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.212036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.715 [2024-12-09 06:23:49.212042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.715 [2024-12-09 06:23:49.212045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.212049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.715 [2024-12-09 06:23:49.212053] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:54.715 [2024-12-09 06:23:49.212058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:54.715 [2024-12-09 06:23:49.212065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:54.715 [2024-12-09 06:23:49.212077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:54.715 [2024-12-09 06:23:49.212086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.212090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.715 [2024-12-09 06:23:49.212096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.715 [2024-12-09 06:23:49.212107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.715 [2024-12-09 06:23:49.212315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.715 [2024-12-09 06:23:49.212321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.715 [2024-12-09 06:23:49.212325] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.212329] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5e690): datao=0, datal=4096, cccid=0 00:24:54.715 [2024-12-09 06:23:49.212333] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1dc0100) on tqpair(0x1d5e690): expected_datao=0, payload_size=4096 00:24:54.715 [2024-12-09 06:23:49.212338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.715 [2024-12-09 06:23:49.212346] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.212350] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.716 [2024-12-09 06:23:49.253635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.716 [2024-12-09 06:23:49.253639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.716 [2024-12-09 06:23:49.253652] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:54.716 [2024-12-09 06:23:49.253661] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:54.716 [2024-12-09 06:23:49.253665] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:54.716 [2024-12-09 06:23:49.253670] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:54.716 [2024-12-09 06:23:49.253675] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:54.716 [2024-12-09 06:23:49.253680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:54.716 [2024-12-09 06:23:49.253690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:54.716 [2024-12-09 06:23:49.253697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.253712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:54.716 [2024-12-09 06:23:49.253724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.716 [2024-12-09 06:23:49.253893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.716 [2024-12-09 06:23:49.253899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.716 [2024-12-09 06:23:49.253902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.716 [2024-12-09 06:23:49.253913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d5e690) 00:24:54.716 
[2024-12-09 06:23:49.253926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.716 [2024-12-09 06:23:49.253932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.253945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.716 [2024-12-09 06:23:49.253951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.253963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.716 [2024-12-09 06:23:49.253969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.253980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.253986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.716 [2024-12-09 06:23:49.253991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:54.716 [2024-12-09 06:23:49.254003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:54.716 [2024-12-09 06:23:49.254009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.254012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.254019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.716 [2024-12-09 06:23:49.254030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0100, cid 0, qid 0 00:24:54.716 [2024-12-09 06:23:49.254035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0280, cid 1, qid 0 00:24:54.716 [2024-12-09 06:23:49.254040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0400, cid 2, qid 0 00:24:54.716 [2024-12-09 06:23:49.254046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0580, cid 3, qid 0 00:24:54.716 [2024-12-09 06:23:49.254050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0700, cid 4, qid 0 00:24:54.716 [2024-12-09 06:23:49.254294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.716 [2024-12-09 06:23:49.254300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.716 [2024-12-09 06:23:49.254303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:54.716 [2024-12-09 06:23:49.254307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0700) on tqpair=0x1d5e690 00:24:54.716 [2024-12-09 06:23:49.254312] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:54.716 [2024-12-09 06:23:49.254317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:54.716 [2024-12-09 06:23:49.254328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.254332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.254338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.716 [2024-12-09 06:23:49.254347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0700, cid 4, qid 0 00:24:54.716 [2024-12-09 06:23:49.258460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.716 [2024-12-09 06:23:49.258470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.716 [2024-12-09 06:23:49.258477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258481] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5e690): datao=0, datal=4096, cccid=4 00:24:54.716 [2024-12-09 06:23:49.258486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc0700) on tqpair(0x1d5e690): expected_datao=0, payload_size=4096 00:24:54.716 [2024-12-09 06:23:49.258490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258497] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258503] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.716 [2024-12-09 06:23:49.258518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.716 [2024-12-09 06:23:49.258522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0700) on tqpair=0x1d5e690 00:24:54.716 [2024-12-09 06:23:49.258543] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:54.716 [2024-12-09 06:23:49.258572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.258584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.716 [2024-12-09 06:23:49.258591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d5e690) 00:24:54.716 [2024-12-09 06:23:49.258603] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.716 [2024-12-09 06:23:49.258618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0700, cid 4, qid 0 00:24:54.716 [2024-12-09 06:23:49.258624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0880, cid 5, qid 0 00:24:54.716 [2024-12-09 06:23:49.258843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.716 [2024-12-09 06:23:49.258851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.716 [2024-12-09 06:23:49.258854] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258857] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5e690): datao=0, datal=1024, cccid=4 00:24:54.716 [2024-12-09 06:23:49.258861] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc0700) on tqpair(0x1d5e690): expected_datao=0, payload_size=1024 00:24:54.716 [2024-12-09 06:23:49.258865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258872] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.716 [2024-12-09 06:23:49.258887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.716 [2024-12-09 06:23:49.258891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.716 [2024-12-09 06:23:49.258894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0880) on tqpair=0x1d5e690 00:24:54.980 [2024-12-09 06:23:49.300660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.980 [2024-12-09 06:23:49.300676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.980 [2024-12-09 06:23:49.300680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.300684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0700) on tqpair=0x1d5e690 00:24:54.980 [2024-12-09 06:23:49.300699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.300704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5e690) 00:24:54.980 [2024-12-09 06:23:49.300711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.980 [2024-12-09 06:23:49.300728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0700, cid 4, qid 0 00:24:54.980 [2024-12-09 06:23:49.300950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.980 [2024-12-09 06:23:49.300956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.980 [2024-12-09 06:23:49.300960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.300965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5e690): datao=0, datal=3072, cccid=4 00:24:54.980 [2024-12-09 06:23:49.300970] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc0700) on tqpair(0x1d5e690): expected_datao=0, payload_size=3072 00:24:54.980 [2024-12-09 06:23:49.300979] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.300986] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.300990] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.980 [2024-12-09 06:23:49.301115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.980 [2024-12-09 06:23:49.301120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0700) on tqpair=0x1d5e690 00:24:54.980 [2024-12-09 06:23:49.301132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d5e690) 00:24:54.980 [2024-12-09 06:23:49.301143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.980 [2024-12-09 06:23:49.301157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0700, cid 4, qid 0 00:24:54.980 [2024-12-09 06:23:49.301338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.980 [2024-12-09 06:23:49.301344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.980 [2024-12-09 06:23:49.301348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301352] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d5e690): datao=0, datal=8, cccid=4 00:24:54.980 [2024-12-09 06:23:49.301357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dc0700) on tqpair(0x1d5e690): expected_datao=0, payload_size=8 00:24:54.980 [2024-12-09 06:23:49.301361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301368] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.301371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.342636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.980 [2024-12-09 06:23:49.342646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.980 [2024-12-09 06:23:49.342650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.980 [2024-12-09 06:23:49.342654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0700) on tqpair=0x1d5e690 00:24:54.980 ===================================================== 00:24:54.980 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:54.980 ===================================================== 00:24:54.980 Controller Capabilities/Features 00:24:54.980 ================================ 00:24:54.980 Vendor ID: 0000 00:24:54.980 Subsystem Vendor ID: 0000 00:24:54.980 Serial Number: .................... 00:24:54.980 Model Number: ........................................ 
00:24:54.980 Firmware Version: 25.01 00:24:54.980 Recommended Arb Burst: 0 00:24:54.980 IEEE OUI Identifier: 00 00 00 00:24:54.980 Multi-path I/O 00:24:54.980 May have multiple subsystem ports: No 00:24:54.980 May have multiple controllers: No 00:24:54.980 Associated with SR-IOV VF: No 00:24:54.980 Max Data Transfer Size: 131072 00:24:54.980 Max Number of Namespaces: 0 00:24:54.980 Max Number of I/O Queues: 1024 00:24:54.980 NVMe Specification Version (VS): 1.3 00:24:54.980 NVMe Specification Version (Identify): 1.3 00:24:54.980 Maximum Queue Entries: 128 00:24:54.980 Contiguous Queues Required: Yes 00:24:54.980 Arbitration Mechanisms Supported 00:24:54.980 Weighted Round Robin: Not Supported 00:24:54.980 Vendor Specific: Not Supported 00:24:54.980 Reset Timeout: 15000 ms 00:24:54.980 Doorbell Stride: 4 bytes 00:24:54.980 NVM Subsystem Reset: Not Supported 00:24:54.980 Command Sets Supported 00:24:54.980 NVM Command Set: Supported 00:24:54.980 Boot Partition: Not Supported 00:24:54.980 Memory Page Size Minimum: 4096 bytes 00:24:54.980 Memory Page Size Maximum: 4096 bytes 00:24:54.980 Persistent Memory Region: Not Supported 00:24:54.980 Optional Asynchronous Events Supported 00:24:54.980 Namespace Attribute Notices: Not Supported 00:24:54.980 Firmware Activation Notices: Not Supported 00:24:54.980 ANA Change Notices: Not Supported 00:24:54.980 PLE Aggregate Log Change Notices: Not Supported 00:24:54.980 LBA Status Info Alert Notices: Not Supported 00:24:54.980 EGE Aggregate Log Change Notices: Not Supported 00:24:54.980 Normal NVM Subsystem Shutdown event: Not Supported 00:24:54.980 Zone Descriptor Change Notices: Not Supported 00:24:54.980 Discovery Log Change Notices: Supported 00:24:54.980 Controller Attributes 00:24:54.980 128-bit Host Identifier: Not Supported 00:24:54.980 Non-Operational Permissive Mode: Not Supported 00:24:54.980 NVM Sets: Not Supported 00:24:54.980 Read Recovery Levels: Not Supported 00:24:54.980 Endurance Groups: Not Supported 00:24:54.980 Predictable Latency Mode: Not Supported 00:24:54.980 Traffic Based Keep ALive: Not Supported 00:24:54.980 Namespace Granularity: Not Supported 00:24:54.980 SQ Associations: Not Supported 00:24:54.980 UUID List: Not Supported 00:24:54.980 Multi-Domain Subsystem: Not Supported 00:24:54.980 Fixed Capacity Management: Not Supported 00:24:54.980 Variable Capacity Management: Not Supported 00:24:54.980 Delete Endurance Group: Not Supported 00:24:54.980 Delete NVM Set: Not Supported 00:24:54.980 Extended LBA Formats Supported: Not Supported 00:24:54.980 Flexible Data Placement Supported: Not Supported 00:24:54.980 00:24:54.980 Controller Memory Buffer Support 00:24:54.980 ================================ 00:24:54.980 Supported: No 00:24:54.980 00:24:54.980 Persistent Memory Region Support 00:24:54.980 ================================ 00:24:54.980 Supported: No 00:24:54.980 00:24:54.980 Admin Command Set Attributes 00:24:54.980 ============================ 00:24:54.980 Security Send/Receive: Not Supported 00:24:54.980 Format NVM: Not Supported 00:24:54.980 Firmware Activate/Download: Not Supported 00:24:54.980 Namespace Management: Not Supported 00:24:54.980 Device Self-Test: Not Supported 00:24:54.980 Directives: Not Supported 00:24:54.980 NVMe-MI: Not Supported 00:24:54.980 Virtualization Management: Not Supported 00:24:54.980 Doorbell Buffer Config: Not Supported 00:24:54.980 Get LBA Status Capability: Not Supported 00:24:54.980 Command & Feature Lockdown Capability: Not Supported 00:24:54.980 Abort Command Limit: 1 00:24:54.980 Async 
Event Request Limit: 4 00:24:54.980 Number of Firmware Slots: N/A 00:24:54.980 Firmware Slot 1 Read-Only: N/A 00:24:54.980 Firmware Activation Without Reset: N/A 00:24:54.980 Multiple Update Detection Support: N/A 00:24:54.980 Firmware Update Granularity: No Information Provided 00:24:54.980 Per-Namespace SMART Log: No 00:24:54.980 Asymmetric Namespace Access Log Page: Not Supported 00:24:54.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:54.980 Command Effects Log Page: Not Supported 00:24:54.980 Get Log Page Extended Data: Supported 00:24:54.980 Telemetry Log Pages: Not Supported 00:24:54.980 Persistent Event Log Pages: Not Supported 00:24:54.981 Supported Log Pages Log Page: May Support 00:24:54.981 Commands Supported & Effects Log Page: Not Supported 00:24:54.981 Feature Identifiers & Effects Log Page:May Support 00:24:54.981 NVMe-MI Commands & Effects Log Page: May Support 00:24:54.981 Data Area 4 for Telemetry Log: Not Supported 00:24:54.981 Error Log Page Entries Supported: 128 00:24:54.981 Keep Alive: Not Supported 00:24:54.981 00:24:54.981 NVM Command Set Attributes 00:24:54.981 ========================== 00:24:54.981 Submission Queue Entry Size 00:24:54.981 Max: 1 00:24:54.981 Min: 1 00:24:54.981 Completion Queue Entry Size 00:24:54.981 Max: 1 00:24:54.981 Min: 1 00:24:54.981 Number of Namespaces: 0 00:24:54.981 Compare Command: Not Supported 00:24:54.981 Write Uncorrectable Command: Not Supported 00:24:54.981 Dataset Management Command: Not Supported 00:24:54.981 Write Zeroes Command: Not Supported 00:24:54.981 Set Features Save Field: Not Supported 00:24:54.981 Reservations: Not Supported 00:24:54.981 Timestamp: Not Supported 00:24:54.981 Copy: Not Supported 00:24:54.981 Volatile Write Cache: Not Present 00:24:54.981 Atomic Write Unit (Normal): 1 00:24:54.981 Atomic Write Unit (PFail): 1 00:24:54.981 Atomic Compare & Write Unit: 1 00:24:54.981 Fused Compare & Write: Supported 00:24:54.981 Scatter-Gather List 00:24:54.981 SGL Command Set: Supported 00:24:54.981 SGL Keyed: Supported 00:24:54.981 SGL Bit Bucket Descriptor: Not Supported 00:24:54.981 SGL Metadata Pointer: Not Supported 00:24:54.981 Oversized SGL: Not Supported 00:24:54.981 SGL Metadata Address: Not Supported 00:24:54.981 SGL Offset: Supported 00:24:54.981 Transport SGL Data Block: Not Supported 00:24:54.981 Replay Protected Memory Block: Not Supported 00:24:54.981 00:24:54.981 Firmware Slot Information 00:24:54.981 ========================= 00:24:54.981 Active slot: 0 00:24:54.981 00:24:54.981 00:24:54.981 Error Log 00:24:54.981 ========= 00:24:54.981 00:24:54.981 Active Namespaces 00:24:54.981 ================= 00:24:54.981 Discovery Log Page 00:24:54.981 ================== 00:24:54.981 Generation Counter: 2 00:24:54.981 Number of Records: 2 00:24:54.981 Record Format: 0 00:24:54.981 00:24:54.981 Discovery Log Entry 0 00:24:54.981 ---------------------- 00:24:54.981 Transport Type: 3 (TCP) 00:24:54.981 Address Family: 1 (IPv4) 00:24:54.981 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:54.981 Entry Flags: 00:24:54.981 Duplicate Returned Information: 1 00:24:54.981 Explicit Persistent Connection Support for Discovery: 1 00:24:54.981 Transport Requirements: 00:24:54.981 Secure Channel: Not Required 00:24:54.981 Port ID: 0 (0x0000) 00:24:54.981 Controller ID: 65535 (0xffff) 00:24:54.981 Admin Max SQ Size: 128 00:24:54.981 Transport Service Identifier: 4420 00:24:54.981 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:54.981 Transport Address: 10.0.0.2 00:24:54.981 
Discovery Log Entry 1 00:24:54.981 ---------------------- 00:24:54.981 Transport Type: 3 (TCP) 00:24:54.981 Address Family: 1 (IPv4) 00:24:54.981 Subsystem Type: 2 (NVM Subsystem) 00:24:54.981 Entry Flags: 00:24:54.981 Duplicate Returned Information: 0 00:24:54.981 Explicit Persistent Connection Support for Discovery: 0 00:24:54.981 Transport Requirements: 00:24:54.981 Secure Channel: Not Required 00:24:54.981 Port ID: 0 (0x0000) 00:24:54.981 Controller ID: 65535 (0xffff) 00:24:54.981 Admin Max SQ Size: 128 00:24:54.981 Transport Service Identifier: 4420 00:24:54.981 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:54.981 Transport Address: 10.0.0.2 [2024-12-09 06:23:49.342757] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:54.981 [2024-12-09 06:23:49.342769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0100) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.342777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.981 [2024-12-09 06:23:49.342783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0280) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.342787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.981 [2024-12-09 06:23:49.342792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0400) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.342796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.981 [2024-12-09 06:23:49.342801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0580) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.342805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.981 [2024-12-09 06:23:49.342818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.342822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.342827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5e690) 00:24:54.981 [2024-12-09 06:23:49.342834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.981 [2024-12-09 06:23:49.342848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0580, cid 3, qid 0 00:24:54.981 [2024-12-09 06:23:49.343028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.981 [2024-12-09 06:23:49.343035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.981 [2024-12-09 06:23:49.343038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0580) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.343050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5e690) 00:24:54.981 [2024-12-09 
06:23:49.343064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.981 [2024-12-09 06:23:49.343077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0580, cid 3, qid 0 00:24:54.981 [2024-12-09 06:23:49.343316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.981 [2024-12-09 06:23:49.343322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.981 [2024-12-09 06:23:49.343325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0580) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.343334] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:54.981 [2024-12-09 06:23:49.343338] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:54.981 [2024-12-09 06:23:49.343347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.343355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d5e690) 00:24:54.981 [2024-12-09 06:23:49.343361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.981 [2024-12-09 06:23:49.343371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dc0580, cid 3, qid 0 00:24:54.981 [2024-12-09 06:23:49.347481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.981 [2024-12-09 06:23:49.347491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.981 [2024-12-09 06:23:49.347494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.981 [2024-12-09 06:23:49.347498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dc0580) on tqpair=0x1d5e690 00:24:54.981 [2024-12-09 06:23:49.347506] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:24:54.981 00:24:54.981 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:54.981 [2024-12-09 06:23:49.393245] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
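The shell line above starts a second spdk_nvme_identify run, this time against nqn.2016-06.io.spdk:cnode1, passing the target as a transport-ID string via -r. A sketch of the overall shape of such a tool under stated assumptions: spdk_env_opts_init(), spdk_env_init(), spdk_nvme_transport_id_parse(), spdk_nvme_connect() and spdk_nvme_detach() are real SPDK API, while the program name and error handling are illustrative.

/* Sketch of an identify-style tool's lifecycle: env bring-up, connect from
 * the -r string, detach. Not the actual spdk_nvme_identify source. */
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    /* Produces the "[ DPDK EAL parameters: ... ]" banner seen above. */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";  /* illustrative name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Exactly the -r key:value grammar from the command line above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Blocks while the admin queue runs the init state machine traced in
     * this log; NULL opts selects library defaults. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* ... dump identify data here ... */

    /* Triggers the "Prepare to destruct SSD" / shutdown sequence the
     * previous run's trace ends with. */
    spdk_nvme_detach(ctrlr);
    return 0;
}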
00:24:54.981 [2024-12-09 06:23:49.393291] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417210 ] 00:24:54.981 [2024-12-09 06:23:49.448306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:54.981 [2024-12-09 06:23:49.448367] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:54.981 [2024-12-09 06:23:49.448373] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:54.981 [2024-12-09 06:23:49.448391] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:54.981 [2024-12-09 06:23:49.448401] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:54.981 [2024-12-09 06:23:49.449030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:54.982 [2024-12-09 06:23:49.449076] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10c6690 0 00:24:54.982 [2024-12-09 06:23:49.462463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:54.982 [2024-12-09 06:23:49.462481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:54.982 [2024-12-09 06:23:49.462486] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:54.982 [2024-12-09 06:23:49.462489] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:54.982 [2024-12-09 06:23:49.462528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.462534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.462538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.462551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:54.982 [2024-12-09 06:23:49.462575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.470460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.470469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.470472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.470486] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:54.982 [2024-12-09 06:23:49.470494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:54.982 [2024-12-09 06:23:49.470499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:54.982 [2024-12-09 06:23:49.470512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470520] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.470528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.470542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.470659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.470665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.470668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.470677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:54.982 [2024-12-09 06:23:49.470684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:54.982 [2024-12-09 06:23:49.470691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.470709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.470719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.470876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.470882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.470885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.470894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:54.982 [2024-12-09 06:23:49.470902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.470908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.470915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.470921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.470932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.471156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.471162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 
06:23:49.471165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.471174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.471183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.471197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.471206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.471465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.471471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.471474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.471482] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:54.982 [2024-12-09 06:23:49.471487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.471494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.471603] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:54.982 [2024-12-09 06:23:49.471610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.471618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.471631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.471641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.471883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.471889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.471892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 
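The FABRIC PROPERTY traffic just traced is the standard NVMe enable handshake: read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0 if needed, write CC.EN = 1, then poll for CSTS.RDY = 1. A register-level sketch of that handshake follows; prop_get()/prop_set() are hypothetical stand-ins for the PROPERTY GET/SET commands in the trace, while the register offsets and bit positions are the real ones from the NVMe specification.

/* Shape of the CC.EN / CSTS.RDY handshake. prop_get()/prop_set() are
 * hypothetical property accessors, not SPDK API. */
#include <stdint.h>

#define NVME_REG_CC   0x14u  /* Controller Configuration */
#define NVME_REG_CSTS 0x1cu  /* Controller Status */
#define CC_EN         (1u << 0)
#define CSTS_RDY      (1u << 0)

extern uint32_t prop_get(uint32_t off);             /* hypothetical */
extern void     prop_set(uint32_t off, uint32_t v); /* hypothetical */

static void
enable_controller(void)
{
    /* If a previous user left the controller live, disable first and wait
     * for RDY=0 (the "disable and wait for CSTS.RDY = 0" state above;
     * this run instead saw CC.EN = 0 && CSTS.RDY = 0 and skipped it). */
    if (prop_get(NVME_REG_CC) & CC_EN) {
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~CC_EN);
        while (prop_get(NVME_REG_CSTS) & CSTS_RDY) { /* poll */ }
    }

    /* "Setting CC.EN = 1" ... */
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_EN);

    /* ... then "wait for CSTS.RDY = 1 (timeout 15000 ms)"; a real driver
     * bounds this loop using CAP.TO rather than spinning forever. */
    while (!(prop_get(NVME_REG_CSTS) & CSTS_RDY)) { /* poll */ }
}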
[2024-12-09 06:23:49.471900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:54.982 [2024-12-09 06:23:49.471909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.471916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.471922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.471932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.472091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.982 [2024-12-09 06:23:49.472097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.982 [2024-12-09 06:23:49.472101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.472104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.982 [2024-12-09 06:23:49.472108] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:54.982 [2024-12-09 06:23:49.472113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:54.982 [2024-12-09 06:23:49.472120] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:54.982 [2024-12-09 06:23:49.472133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:54.982 [2024-12-09 06:23:49.472142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.472146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.982 [2024-12-09 06:23:49.472152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.982 [2024-12-09 06:23:49.472162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.982 [2024-12-09 06:23:49.472337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.982 [2024-12-09 06:23:49.472343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.982 [2024-12-09 06:23:49.472346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.472350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=0 00:24:54.982 [2024-12-09 06:23:49.472355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128100) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:24:54.982 [2024-12-09 06:23:49.472364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.472383] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.982 [2024-12-09 06:23:49.472387] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.472489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.983 [2024-12-09 06:23:49.472492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.983 [2024-12-09 06:23:49.472504] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:54.983 [2024-12-09 06:23:49.472511] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:54.983 [2024-12-09 06:23:49.472515] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:54.983 [2024-12-09 06:23:49.472520] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:54.983 [2024-12-09 06:23:49.472524] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:54.983 [2024-12-09 06:23:49.472529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.472537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.472543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:54.983 [2024-12-09 06:23:49.472568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.983 [2024-12-09 06:23:49.472701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.472707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.983 [2024-12-09 06:23:49.472710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:54.983 [2024-12-09 06:23:49.472720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.983 [2024-12-09 06:23:49.472739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 
06:23:49.472746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.983 [2024-12-09 06:23:49.472757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.983 [2024-12-09 06:23:49.472777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.983 [2024-12-09 06:23:49.472794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.472804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.472810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.472814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.472820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.983 [2024-12-09 06:23:49.472832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:24:54.983 [2024-12-09 06:23:49.472837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128280, cid 1, qid 0 00:24:54.983 [2024-12-09 06:23:49.472842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128400, cid 2, qid 0 00:24:54.983 [2024-12-09 06:23:49.472846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:54.983 [2024-12-09 06:23:49.472851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:54.983 [2024-12-09 06:23:49.473005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.473011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.983 [2024-12-09 06:23:49.473015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:54.983 [2024-12-09 06:23:49.473023] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:54.983 [2024-12-09 06:23:49.473028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.473036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.473042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.473048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.473061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:54.983 [2024-12-09 06:23:49.473071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:54.983 [2024-12-09 06:23:49.473171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.473177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.983 [2024-12-09 06:23:49.473180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:54.983 [2024-12-09 06:23:49.473246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.473257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.473264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.473274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.983 [2024-12-09 06:23:49.473284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:54.983 [2024-12-09 06:23:49.473433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.983 [2024-12-09 06:23:49.473439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.983 [2024-12-09 06:23:49.473442] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473446] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:24:54.983 [2024-12-09 06:23:49.473458] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:24:54.983 [2024-12-09 06:23:49.473462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473476] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.473479] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 
06:23:49.516460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.516473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.983 [2024-12-09 06:23:49.516476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.516480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:54.983 [2024-12-09 06:23:49.516491] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:54.983 [2024-12-09 06:23:49.516505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.516514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:54.983 [2024-12-09 06:23:49.516522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.516526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:54.983 [2024-12-09 06:23:49.516534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.983 [2024-12-09 06:23:49.516547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:54.983 [2024-12-09 06:23:49.516703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.983 [2024-12-09 06:23:49.516711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.983 [2024-12-09 06:23:49.516715] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.516718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:24:54.983 [2024-12-09 06:23:49.516723] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:24:54.983 [2024-12-09 06:23:49.516727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.516741] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.516746] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:54.983 [2024-12-09 06:23:49.560458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:54.983 [2024-12-09 06:23:49.560469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:54.984 [2024-12-09 06:23:49.560476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:54.984 [2024-12-09 06:23:49.560480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:54.984 [2024-12-09 06:23:49.560495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:54.984 [2024-12-09 06:23:49.560505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:54.984 [2024-12-09 06:23:49.560513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:54.984 [2024-12-09 06:23:49.560517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:54.984 [2024-12-09 06:23:49.560524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.984 [2024-12-09 06:23:49.560536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:54.984 [2024-12-09 06:23:49.560679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:54.984 [2024-12-09 06:23:49.560685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:54.984 [2024-12-09 06:23:49.560688] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:54.984 [2024-12-09 06:23:49.560692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:24:54.984 [2024-12-09 06:23:49.560696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:24:54.984 [2024-12-09 06:23:49.560700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:54.984 [2024-12-09 06:23:49.560713] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:54.984 [2024-12-09 06:23:49.560717] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:55.247 [2024-12-09 06:23:49.602636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.247 [2024-12-09 06:23:49.602646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.247 [2024-12-09 06:23:49.602649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.247 [2024-12-09 06:23:49.602653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:55.247 [2024-12-09 06:23:49.602662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602702] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:55.247 [2024-12-09 06:23:49.602706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:55.247 [2024-12-09 06:23:49.602712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:55.247 [2024-12-09 06:23:49.602726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.247 
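After "Namespace 1 was added", the trace fetches IDENTIFY CNS 00 and CNS 03 for that namespace. Once spdk_nvme_connect() returns, the same inventory is reachable through real SPDK accessors (spdk_nvme_ctrlr_get_first_active_ns, spdk_nvme_ctrlr_get_next_active_ns, spdk_nvme_ctrlr_get_ns, spdk_nvme_ns_get_data), as in this sketch; the printf is illustrative.

/* Sketch: walk the active namespace list populated by the IDENTIFY
 * commands traced above. */
#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

        /* nsze comes from the IDENTIFY (CNS 00) fetched above. */
        printf("nsid %u: %" PRIu64 " blocks\n", nsid, nsdata->nsze);
    }
}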
[2024-12-09 06:23:49.602730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:55.247 [2024-12-09 06:23:49.602741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.247 [2024-12-09 06:23:49.602748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.602751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.602755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.602760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.248 [2024-12-09 06:23:49.602774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:55.248 [2024-12-09 06:23:49.602779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:24:55.248 [2024-12-09 06:23:49.602935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.602941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.602944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.602948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.602954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.602960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.602963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.602967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.602975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.602979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.602985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.602994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:24:55.248 [2024-12-09 06:23:49.603161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.603167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.603170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.603183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603201] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:24:55.248 [2024-12-09 06:23:49.603467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.603474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.603477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.603489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:24:55.248 [2024-12-09 06:23:49.603671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.603677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.603680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.603700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.603753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10c6690) 00:24:55.248 [2024-12-09 06:23:49.603759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.248 [2024-12-09 06:23:49.603769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:24:55.248 
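The four GET LOG PAGE capsules sent above (cids 4 through 7) also show how cdw10 is packed for that admin command: the low byte is the log page identifier (LID) and bits 31:16 carry NUMDL, the zero-based count of dwords to return. Decoding the traced values lines up with the payload sizes reported in the C2HData PDUs that follow: 07ff0001 is LID 01h (Error Information) for 2048 dwords = 8192 bytes (the datal=8192, cccid=5 transfer), 007f0002 and 007f0003 are LIDs 02h (SMART / Health) and 03h (Firmware Slot) for 512 bytes each, and 03ff0005 is LID 05h (Commands Supported and Effects) for 4096 bytes. A minimal sketch of that encoding, as a hedged illustration rather than SPDK's own helper:

    #include <assert.h>
    #include <stdint.h>

    /* Get Log Page (opcode 02h) cdw10 layout: NUMDL in bits 31:16, LID in bits 7:0.
     * NUMD is zero-based (bytes / 4 - 1); the upper half, NUMDU, goes in cdw11. */
    static uint32_t get_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
        uint32_t numd = payload_bytes / 4 - 1;   /* zero-based dword count */
        return ((numd & 0xFFFF) << 16) | lid;    /* NUMDL | LID */
    }

    int main(void)
    {
        assert(get_log_page_cdw10(0x01, 8192) == 0x07ff0001); /* error log: 128 entries x 64 B */
        assert(get_log_page_cdw10(0x02,  512) == 0x007f0002); /* SMART / health information   */
        assert(get_log_page_cdw10(0x03,  512) == 0x007f0003); /* firmware slot information    */
        assert(get_log_page_cdw10(0x05, 4096) == 0x03ff0005); /* commands supported & effects */
        return 0;
    }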
[2024-12-09 06:23:49.603774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:24:55.248 [2024-12-09 06:23:49.603779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128a00, cid 6, qid 0 00:24:55.248 [2024-12-09 06:23:49.603783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128b80, cid 7, qid 0 00:24:55.248 [2024-12-09 06:23:49.604079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:55.248 [2024-12-09 06:23:49.604085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:55.248 [2024-12-09 06:23:49.604089] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604092] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=8192, cccid=5 00:24:55.248 [2024-12-09 06:23:49.604096] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128880) on tqpair(0x10c6690): expected_datao=0, payload_size=8192 00:24:55.248 [2024-12-09 06:23:49.604101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604175] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:55.248 [2024-12-09 06:23:49.604186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:55.248 [2024-12-09 06:23:49.604189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=512, cccid=4 00:24:55.248 [2024-12-09 06:23:49.604197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=512 00:24:55.248 [2024-12-09 06:23:49.604201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604209] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604212] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:55.248 [2024-12-09 06:23:49.604223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:55.248 [2024-12-09 06:23:49.604226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=512, cccid=6 00:24:55.248 [2024-12-09 06:23:49.604234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128a00) on tqpair(0x10c6690): expected_datao=0, payload_size=512 00:24:55.248 [2024-12-09 06:23:49.604238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604244] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604247] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:55.248 [2024-12-09 06:23:49.604257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:55.248 [2024-12-09 06:23:49.604261] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604264] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=7 00:24:55.248 [2024-12-09 06:23:49.604268] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128b80) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:24:55.248 [2024-12-09 06:23:49.604272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.604290] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.645525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.645534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.645537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.645541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.645553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.645559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.645562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.645565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.645576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.645581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.645584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.248 [2024-12-09 06:23:49.645588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128a00) on tqpair=0x10c6690 00:24:55.248 [2024-12-09 06:23:49.645595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.248 [2024-12-09 06:23:49.645600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.248 [2024-12-09 06:23:49.645603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.249 [2024-12-09 06:23:49.645607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128b80) on tqpair=0x10c6690 00:24:55.249 ===================================================== 00:24:55.249 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.249 ===================================================== 00:24:55.249 Controller Capabilities/Features 00:24:55.249 ================================ 00:24:55.249 Vendor ID: 8086 00:24:55.249 Subsystem Vendor ID: 8086 00:24:55.249 Serial Number: SPDK00000000000001 00:24:55.249 Model Number: SPDK bdev Controller 00:24:55.249 Firmware Version: 25.01 00:24:55.249 Recommended Arb Burst: 6 00:24:55.249 IEEE OUI Identifier: e4 d2 5c 00:24:55.249 Multi-path I/O 00:24:55.249 May have multiple subsystem ports: Yes 00:24:55.249 May have multiple controllers: Yes 00:24:55.249 Associated with SR-IOV VF: No 00:24:55.249 Max Data Transfer Size: 131072 00:24:55.249 Max Number of Namespaces: 32 00:24:55.249 Max Number of I/O Queues: 127 00:24:55.249 NVMe Specification Version (VS): 1.3 00:24:55.249 NVMe Specification Version (Identify): 1.3 
00:24:55.249 Maximum Queue Entries: 128 00:24:55.249 Contiguous Queues Required: Yes 00:24:55.249 Arbitration Mechanisms Supported 00:24:55.249 Weighted Round Robin: Not Supported 00:24:55.249 Vendor Specific: Not Supported 00:24:55.249 Reset Timeout: 15000 ms 00:24:55.249 Doorbell Stride: 4 bytes 00:24:55.249 NVM Subsystem Reset: Not Supported 00:24:55.249 Command Sets Supported 00:24:55.249 NVM Command Set: Supported 00:24:55.249 Boot Partition: Not Supported 00:24:55.249 Memory Page Size Minimum: 4096 bytes 00:24:55.249 Memory Page Size Maximum: 4096 bytes 00:24:55.249 Persistent Memory Region: Not Supported 00:24:55.249 Optional Asynchronous Events Supported 00:24:55.249 Namespace Attribute Notices: Supported 00:24:55.249 Firmware Activation Notices: Not Supported 00:24:55.249 ANA Change Notices: Not Supported 00:24:55.249 PLE Aggregate Log Change Notices: Not Supported 00:24:55.249 LBA Status Info Alert Notices: Not Supported 00:24:55.249 EGE Aggregate Log Change Notices: Not Supported 00:24:55.249 Normal NVM Subsystem Shutdown event: Not Supported 00:24:55.249 Zone Descriptor Change Notices: Not Supported 00:24:55.249 Discovery Log Change Notices: Not Supported 00:24:55.249 Controller Attributes 00:24:55.249 128-bit Host Identifier: Supported 00:24:55.249 Non-Operational Permissive Mode: Not Supported 00:24:55.249 NVM Sets: Not Supported 00:24:55.249 Read Recovery Levels: Not Supported 00:24:55.249 Endurance Groups: Not Supported 00:24:55.249 Predictable Latency Mode: Not Supported 00:24:55.249 Traffic Based Keep ALive: Not Supported 00:24:55.249 Namespace Granularity: Not Supported 00:24:55.249 SQ Associations: Not Supported 00:24:55.249 UUID List: Not Supported 00:24:55.249 Multi-Domain Subsystem: Not Supported 00:24:55.249 Fixed Capacity Management: Not Supported 00:24:55.249 Variable Capacity Management: Not Supported 00:24:55.249 Delete Endurance Group: Not Supported 00:24:55.249 Delete NVM Set: Not Supported 00:24:55.249 Extended LBA Formats Supported: Not Supported 00:24:55.249 Flexible Data Placement Supported: Not Supported 00:24:55.249 00:24:55.249 Controller Memory Buffer Support 00:24:55.249 ================================ 00:24:55.249 Supported: No 00:24:55.249 00:24:55.249 Persistent Memory Region Support 00:24:55.249 ================================ 00:24:55.249 Supported: No 00:24:55.249 00:24:55.249 Admin Command Set Attributes 00:24:55.249 ============================ 00:24:55.249 Security Send/Receive: Not Supported 00:24:55.249 Format NVM: Not Supported 00:24:55.249 Firmware Activate/Download: Not Supported 00:24:55.249 Namespace Management: Not Supported 00:24:55.249 Device Self-Test: Not Supported 00:24:55.249 Directives: Not Supported 00:24:55.249 NVMe-MI: Not Supported 00:24:55.249 Virtualization Management: Not Supported 00:24:55.249 Doorbell Buffer Config: Not Supported 00:24:55.249 Get LBA Status Capability: Not Supported 00:24:55.249 Command & Feature Lockdown Capability: Not Supported 00:24:55.249 Abort Command Limit: 4 00:24:55.249 Async Event Request Limit: 4 00:24:55.249 Number of Firmware Slots: N/A 00:24:55.249 Firmware Slot 1 Read-Only: N/A 00:24:55.249 Firmware Activation Without Reset: N/A 00:24:55.249 Multiple Update Detection Support: N/A 00:24:55.249 Firmware Update Granularity: No Information Provided 00:24:55.249 Per-Namespace SMART Log: No 00:24:55.249 Asymmetric Namespace Access Log Page: Not Supported 00:24:55.249 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:55.249 Command Effects Log Page: Supported 00:24:55.249 Get Log Page Extended 
Data: Supported 00:24:55.249 Telemetry Log Pages: Not Supported 00:24:55.249 Persistent Event Log Pages: Not Supported 00:24:55.249 Supported Log Pages Log Page: May Support 00:24:55.249 Commands Supported & Effects Log Page: Not Supported 00:24:55.249 Feature Identifiers & Effects Log Page:May Support 00:24:55.249 NVMe-MI Commands & Effects Log Page: May Support 00:24:55.249 Data Area 4 for Telemetry Log: Not Supported 00:24:55.249 Error Log Page Entries Supported: 128 00:24:55.249 Keep Alive: Supported 00:24:55.249 Keep Alive Granularity: 10000 ms 00:24:55.249 00:24:55.249 NVM Command Set Attributes 00:24:55.249 ========================== 00:24:55.249 Submission Queue Entry Size 00:24:55.249 Max: 64 00:24:55.249 Min: 64 00:24:55.249 Completion Queue Entry Size 00:24:55.249 Max: 16 00:24:55.249 Min: 16 00:24:55.249 Number of Namespaces: 32 00:24:55.249 Compare Command: Supported 00:24:55.249 Write Uncorrectable Command: Not Supported 00:24:55.249 Dataset Management Command: Supported 00:24:55.249 Write Zeroes Command: Supported 00:24:55.249 Set Features Save Field: Not Supported 00:24:55.249 Reservations: Supported 00:24:55.249 Timestamp: Not Supported 00:24:55.249 Copy: Supported 00:24:55.249 Volatile Write Cache: Present 00:24:55.249 Atomic Write Unit (Normal): 1 00:24:55.249 Atomic Write Unit (PFail): 1 00:24:55.249 Atomic Compare & Write Unit: 1 00:24:55.249 Fused Compare & Write: Supported 00:24:55.249 Scatter-Gather List 00:24:55.249 SGL Command Set: Supported 00:24:55.249 SGL Keyed: Supported 00:24:55.249 SGL Bit Bucket Descriptor: Not Supported 00:24:55.249 SGL Metadata Pointer: Not Supported 00:24:55.249 Oversized SGL: Not Supported 00:24:55.249 SGL Metadata Address: Not Supported 00:24:55.249 SGL Offset: Supported 00:24:55.249 Transport SGL Data Block: Not Supported 00:24:55.249 Replay Protected Memory Block: Not Supported 00:24:55.249 00:24:55.249 Firmware Slot Information 00:24:55.249 ========================= 00:24:55.249 Active slot: 1 00:24:55.249 Slot 1 Firmware Revision: 25.01 00:24:55.249 00:24:55.249 00:24:55.249 Commands Supported and Effects 00:24:55.249 ============================== 00:24:55.249 Admin Commands 00:24:55.249 -------------- 00:24:55.249 Get Log Page (02h): Supported 00:24:55.249 Identify (06h): Supported 00:24:55.249 Abort (08h): Supported 00:24:55.249 Set Features (09h): Supported 00:24:55.249 Get Features (0Ah): Supported 00:24:55.249 Asynchronous Event Request (0Ch): Supported 00:24:55.249 Keep Alive (18h): Supported 00:24:55.249 I/O Commands 00:24:55.249 ------------ 00:24:55.249 Flush (00h): Supported LBA-Change 00:24:55.249 Write (01h): Supported LBA-Change 00:24:55.249 Read (02h): Supported 00:24:55.249 Compare (05h): Supported 00:24:55.249 Write Zeroes (08h): Supported LBA-Change 00:24:55.249 Dataset Management (09h): Supported LBA-Change 00:24:55.249 Copy (19h): Supported LBA-Change 00:24:55.249 00:24:55.249 Error Log 00:24:55.249 ========= 00:24:55.249 00:24:55.249 Arbitration 00:24:55.249 =========== 00:24:55.249 Arbitration Burst: 1 00:24:55.249 00:24:55.249 Power Management 00:24:55.249 ================ 00:24:55.249 Number of Power States: 1 00:24:55.249 Current Power State: Power State #0 00:24:55.249 Power State #0: 00:24:55.249 Max Power: 0.00 W 00:24:55.249 Non-Operational State: Operational 00:24:55.249 Entry Latency: Not Reported 00:24:55.249 Exit Latency: Not Reported 00:24:55.249 Relative Read Throughput: 0 00:24:55.249 Relative Read Latency: 0 00:24:55.249 Relative Write Throughput: 0 00:24:55.249 Relative Write Latency: 0 
00:24:55.249 Idle Power: Not Reported 00:24:55.250 Active Power: Not Reported 00:24:55.250 Non-Operational Permissive Mode: Not Supported 00:24:55.250 00:24:55.250 Health Information 00:24:55.250 ================== 00:24:55.250 Critical Warnings: 00:24:55.250 Available Spare Space: OK 00:24:55.250 Temperature: OK 00:24:55.250 Device Reliability: OK 00:24:55.250 Read Only: No 00:24:55.250 Volatile Memory Backup: OK 00:24:55.250 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:55.250 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:55.250 Available Spare: 0% 00:24:55.250 Available Spare Threshold: 0% 00:24:55.250 Life Percentage Used:[2024-12-09 06:23:49.645697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.645709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.645721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128b80, cid 7, qid 0 00:24:55.250 [2024-12-09 06:23:49.645786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.645792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.645796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128b80) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645830] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:55.250 [2024-12-09 06:23:49.645839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.250 [2024-12-09 06:23:49.645850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128280) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.250 [2024-12-09 06:23:49.645859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128400) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.250 [2024-12-09 06:23:49.645868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.250 [2024-12-09 06:23:49.645880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.645894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.645905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.645966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.645972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.645975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.645985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.645992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.645998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.646010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.646084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.646090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.646094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.646102] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:55.250 [2024-12-09 06:23:49.646106] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:55.250 [2024-12-09 06:23:49.646115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.646130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.646140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.646200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.646205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.646209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.646222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.646235] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.646244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.646316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.646322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.646325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.646338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.646351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.646360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.646423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.646429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.646432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.646436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.646445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.650455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.650460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:24:55.250 [2024-12-09 06:23:49.650467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.250 [2024-12-09 06:23:49.650478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:24:55.250 [2024-12-09 06:23:49.650561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:55.250 [2024-12-09 06:23:49.650567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:55.250 [2024-12-09 06:23:49.650570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:55.250 [2024-12-09 06:23:49.650573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:24:55.250 [2024-12-09 06:23:49.650580] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:55.250 0% 00:24:55.250 Data Units Read: 0 00:24:55.250 Data Units Written: 0 00:24:55.250 Host Read Commands: 0 00:24:55.250 Host Write Commands: 0 00:24:55.250 Controller Busy Time: 0 minutes 00:24:55.250 Power Cycles: 0 00:24:55.250 Power On Hours: 0 hours 00:24:55.250 Unsafe Shutdowns: 0 00:24:55.250 Unrecoverable Media Errors: 0 00:24:55.250 Lifetime Error Log Entries: 0 00:24:55.250 Warning Temperature Time: 0 
minutes 00:24:55.250 Critical Temperature Time: 0 minutes 00:24:55.250 00:24:55.250 Number of Queues 00:24:55.250 ================ 00:24:55.250 Number of I/O Submission Queues: 127 00:24:55.250 Number of I/O Completion Queues: 127 00:24:55.250 00:24:55.250 Active Namespaces 00:24:55.250 ================= 00:24:55.250 Namespace ID:1 00:24:55.250 Error Recovery Timeout: Unlimited 00:24:55.250 Command Set Identifier: NVM (00h) 00:24:55.250 Deallocate: Supported 00:24:55.250 Deallocated/Unwritten Error: Not Supported 00:24:55.250 Deallocated Read Value: Unknown 00:24:55.250 Deallocate in Write Zeroes: Not Supported 00:24:55.250 Deallocated Guard Field: 0xFFFF 00:24:55.250 Flush: Supported 00:24:55.250 Reservation: Supported 00:24:55.250 Namespace Sharing Capabilities: Multiple Controllers 00:24:55.250 Size (in LBAs): 131072 (0GiB) 00:24:55.250 Capacity (in LBAs): 131072 (0GiB) 00:24:55.250 Utilization (in LBAs): 131072 (0GiB) 00:24:55.250 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:55.250 EUI64: ABCDEF0123456789 00:24:55.250 UUID: b5fdb99d-33a9-406a-8ef2-24c91791173c 00:24:55.250 Thin Provisioning: Not Supported 00:24:55.251 Per-NS Atomic Units: Yes 00:24:55.251 Atomic Boundary Size (Normal): 0 00:24:55.251 Atomic Boundary Size (PFail): 0 00:24:55.251 Atomic Boundary Offset: 0 00:24:55.251 Maximum Single Source Range Length: 65535 00:24:55.251 Maximum Copy Length: 65535 00:24:55.251 Maximum Source Range Count: 1 00:24:55.251 NGUID/EUI64 Never Reused: No 00:24:55.251 Namespace Write Protected: No 00:24:55.251 Number of LBA Formats: 1 00:24:55.251 Current LBA Format: LBA Format #00 00:24:55.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:55.251 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.251 rmmod nvme_tcp 00:24:55.251 rmmod nvme_fabrics 00:24:55.251 rmmod nvme_keyring 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 417082 ']' 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 417082 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 417082 ']' 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 417082 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417082 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 417082' 00:24:55.251 killing process with pid 417082 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 417082 00:24:55.251 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 417082 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.522 06:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.432 06:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.432 00:24:57.432 real 0m11.530s 00:24:57.432 user 0m8.847s 00:24:57.432 sys 0m6.047s 00:24:57.432 06:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.432 06:23:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.432 ************************************ 00:24:57.432 END TEST nvmf_identify 00:24:57.432 ************************************ 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.693 ************************************ 
00:24:57.693 START TEST nvmf_perf 00:24:57.693 ************************************ 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:57.693 * Looking for test storage... 00:24:57.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.693 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.954 --rc genhtml_branch_coverage=1 00:24:57.954 --rc genhtml_function_coverage=1 00:24:57.954 --rc genhtml_legend=1 00:24:57.954 --rc geninfo_all_blocks=1 00:24:57.954 --rc geninfo_unexecuted_blocks=1 00:24:57.954 00:24:57.954 ' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.954 --rc genhtml_branch_coverage=1 00:24:57.954 --rc genhtml_function_coverage=1 00:24:57.954 --rc genhtml_legend=1 00:24:57.954 --rc geninfo_all_blocks=1 00:24:57.954 --rc geninfo_unexecuted_blocks=1 00:24:57.954 00:24:57.954 ' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.954 --rc genhtml_branch_coverage=1 00:24:57.954 --rc genhtml_function_coverage=1 00:24:57.954 --rc genhtml_legend=1 00:24:57.954 --rc geninfo_all_blocks=1 00:24:57.954 --rc geninfo_unexecuted_blocks=1 00:24:57.954 00:24:57.954 ' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.954 --rc genhtml_branch_coverage=1 00:24:57.954 --rc genhtml_function_coverage=1 00:24:57.954 --rc genhtml_legend=1 00:24:57.954 --rc geninfo_all_blocks=1 00:24:57.954 --rc geninfo_unexecuted_blocks=1 00:24:57.954 00:24:57.954 ' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.954 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.955 06:23:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.955 06:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:06.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:06.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.090 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:06.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.091 06:23:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:06.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.091 06:23:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:25:06.091 00:25:06.091 --- 10.0.0.2 ping statistics --- 00:25:06.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.091 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:25:06.091 00:25:06.091 --- 10.0.0.1 ping statistics --- 00:25:06.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.091 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=421383 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 421383 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 421383 ']' 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:06.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.091 06:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:06.091 [2024-12-09 06:23:59.906824] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:25:06.091 [2024-12-09 06:23:59.906886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.091 [2024-12-09 06:24:00.002570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.091 [2024-12-09 06:24:00.056371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.091 [2024-12-09 06:24:00.056427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.091 [2024-12-09 06:24:00.056436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.091 [2024-12-09 06:24:00.056444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.091 [2024-12-09 06:24:00.056462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.091 [2024-12-09 06:24:00.058390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.091 [2024-12-09 06:24:00.058661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.091 [2024-12-09 06:24:00.058779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.091 [2024-12-09 06:24:00.058478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.352 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.352 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:06.352 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.352 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.352 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:06.353 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.353 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:06.353 06:24:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:09.650 06:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:09.650 06:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:09.650 06:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:09.650 06:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:09.650 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
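
For reference, the target-side bring-up that host/perf.sh drives here reduces to a short rpc.py sequence against the nvmf_tgt started above. The following is a condensed sketch assembled from the commands traced in this run (paths shortened from the full /var/jenkins/workspace/... prefixes; the NQN nqn.2016-06.io.spdk:cnode1, bdev names Malloc0/Nvme0n1, and listener address 10.0.0.2:4420 are the values this log uses; the inline comments are explanatory and not part of the trace, and the pipe on the first line is one plausible wiring of the two commands traced together at perf.sh@28):

    # register the local NVMe drive(s) as bdevs, then add a 64 MiB malloc bdev with 512 B blocks
    scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config
    scripts/rpc.py bdev_malloc_create 64 512
    # create the TCP transport (flags copied from the trace), then a subsystem carrying both bdevs
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # listen for I/O and discovery traffic on the target-namespace interface set up earlier
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocations that follow then connect to this listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.
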
00:25:09.650 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:09.650 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:09.650 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:09.650 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:09.909 [2024-12-09 06:24:04.343528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.909 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.169 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:10.169 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.461 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:10.461 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:10.461 06:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.720 [2024-12-09 06:24:05.107429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.720 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:10.979 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:10.979 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:10.979 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:10.979 06:24:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:12.358 Initializing NVMe Controllers 00:25:12.358 Attached to NVMe Controller at 0000:65:00.0 [8086:0a54] 00:25:12.358 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:12.358 Initialization complete. Launching workers. 
00:25:12.358 ======================================================== 00:25:12.358 Latency(us) 00:25:12.358 Device Information : IOPS MiB/s Average min max 00:25:12.358 PCIE (0000:65:00.0) NSID 1 from core 0: 106219.11 414.92 300.67 36.21 5191.57 00:25:12.358 ======================================================== 00:25:12.358 Total : 106219.11 414.92 300.67 36.21 5191.57 00:25:12.358 00:25:12.358 06:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:13.297 Initializing NVMe Controllers 00:25:13.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:13.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:13.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:13.297 Initialization complete. Launching workers. 00:25:13.297 ======================================================== 00:25:13.297 Latency(us) 00:25:13.297 Device Information : IOPS MiB/s Average min max 00:25:13.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.00 0.40 9784.15 122.63 44940.44 00:25:13.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16199.46 7908.77 47900.04 00:25:13.297 ======================================================== 00:25:13.297 Total : 166.00 0.65 12218.87 122.63 47900.04 00:25:13.297 00:25:13.558 06:24:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.941 Initializing NVMe Controllers 00:25:14.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:14.941 Initialization complete. Launching workers. 00:25:14.941 ======================================================== 00:25:14.941 Latency(us) 00:25:14.941 Device Information : IOPS MiB/s Average min max 00:25:14.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11701.00 45.71 2744.33 516.44 6238.55 00:25:14.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3773.00 14.74 8520.57 6304.83 16199.41 00:25:14.941 ======================================================== 00:25:14.941 Total : 15474.00 60.45 4152.74 516.44 16199.41 00:25:14.941 00:25:14.941 06:24:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:14.941 06:24:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:14.941 06:24:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.485 Initializing NVMe Controllers 00:25:17.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.485 Controller IO queue size 128, less than required. 00:25:17.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:17.485 Controller IO queue size 128, less than required. 00:25:17.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:17.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:17.485 Initialization complete. Launching workers. 00:25:17.485 ======================================================== 00:25:17.485 Latency(us) 00:25:17.485 Device Information : IOPS MiB/s Average min max 00:25:17.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1937.47 484.37 66644.42 38216.16 114912.82 00:25:17.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.49 153.62 217777.41 67890.73 328333.76 00:25:17.485 ======================================================== 00:25:17.485 Total : 2551.96 637.99 103035.96 38216.16 328333.76 00:25:17.485 00:25:17.485 06:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:17.485 No valid NVMe controllers or AIO or URING devices found 00:25:17.485 Initializing NVMe Controllers 00:25:17.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.485 Controller IO queue size 128, less than required. 00:25:17.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.485 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:17.485 Controller IO queue size 128, less than required. 00:25:17.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.485 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:17.485 WARNING: Some requested NVMe devices were skipped 00:25:17.485 06:24:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:20.023 Initializing NVMe Controllers 00:25:20.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.023 Controller IO queue size 128, less than required. 00:25:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.023 Controller IO queue size 128, less than required. 00:25:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:20.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:20.023 Initialization complete. Launching workers. 
00:25:20.023 00:25:20.023 ==================== 00:25:20.023 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:20.023 TCP transport: 00:25:20.023 polls: 35761 00:25:20.023 idle_polls: 22742 00:25:20.023 sock_completions: 13019 00:25:20.023 nvme_completions: 9515 00:25:20.023 submitted_requests: 14398 00:25:20.023 queued_requests: 1 00:25:20.023 00:25:20.023 ==================== 00:25:20.023 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:20.023 TCP transport: 00:25:20.023 polls: 36462 00:25:20.023 idle_polls: 22796 00:25:20.023 sock_completions: 13666 00:25:20.023 nvme_completions: 7037 00:25:20.023 submitted_requests: 10542 00:25:20.023 queued_requests: 1 00:25:20.023 ======================================================== 00:25:20.023 Latency(us) 00:25:20.023 Device Information : IOPS MiB/s Average min max 00:25:20.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2378.47 594.62 54209.89 25010.94 99218.94 00:25:20.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1758.98 439.74 73534.29 32297.65 118441.50 00:25:20.023 ======================================================== 00:25:20.023 Total : 4137.45 1034.36 62425.39 25010.94 118441.50 00:25:20.023 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.023 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.023 rmmod nvme_tcp 00:25:20.282 rmmod nvme_fabrics 00:25:20.282 rmmod nvme_keyring 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 421383 ']' 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 421383 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 421383 ']' 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 421383 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421383 00:25:20.282 06:24:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421383' 00:25:20.282 killing process with pid 421383 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 421383 00:25:20.282 06:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 421383 00:25:22.822 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.823 06:24:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.735 00:25:24.735 real 0m26.959s 00:25:24.735 user 1m8.168s 00:25:24.735 sys 0m8.864s 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:24.735 ************************************ 00:25:24.735 END TEST nvmf_perf 00:25:24.735 ************************************ 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.735 ************************************ 00:25:24.735 START TEST nvmf_fio_host 00:25:24.735 ************************************ 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:24.735 * Looking for test storage... 
00:25:24.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.735 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:24.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.996 --rc genhtml_branch_coverage=1 00:25:24.996 --rc genhtml_function_coverage=1 00:25:24.996 --rc genhtml_legend=1 00:25:24.996 --rc geninfo_all_blocks=1 00:25:24.996 --rc geninfo_unexecuted_blocks=1 00:25:24.996 00:25:24.996 ' 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:24.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.996 --rc genhtml_branch_coverage=1 00:25:24.996 --rc genhtml_function_coverage=1 00:25:24.996 --rc genhtml_legend=1 00:25:24.996 --rc geninfo_all_blocks=1 00:25:24.996 --rc geninfo_unexecuted_blocks=1 00:25:24.996 00:25:24.996 ' 00:25:24.996 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:24.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.996 --rc genhtml_branch_coverage=1 00:25:24.996 --rc genhtml_function_coverage=1 00:25:24.996 --rc genhtml_legend=1 00:25:24.996 --rc geninfo_all_blocks=1 00:25:24.996 --rc geninfo_unexecuted_blocks=1 00:25:24.997 00:25:24.997 ' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:24.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.997 --rc genhtml_branch_coverage=1 00:25:24.997 --rc genhtml_function_coverage=1 00:25:24.997 --rc genhtml_legend=1 00:25:24.997 --rc geninfo_all_blocks=1 00:25:24.997 --rc geninfo_unexecuted_blocks=1 00:25:24.997 00:25:24.997 ' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.997 06:24:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:24.997 
06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.997 06:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.134 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.134 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.135 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.135 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.135 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:25:33.135 00:25:33.135 --- 10.0.0.2 ping statistics --- 00:25:33.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.135 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:25:33.135 00:25:33.135 --- 10.0.0.1 ping statistics --- 00:25:33.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.135 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=428676 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 428676 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 428676 ']' 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.135 06:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.135 [2024-12-09 06:24:26.841276] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:25:33.135 [2024-12-09 06:24:26.841340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.135 [2024-12-09 06:24:26.941297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.135 [2024-12-09 06:24:26.992422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.135 [2024-12-09 06:24:26.992492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.135 [2024-12-09 06:24:26.992500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.135 [2024-12-09 06:24:26.992507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.135 [2024-12-09 06:24:26.992513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.136 [2024-12-09 06:24:26.994493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.136 [2024-12-09 06:24:26.994587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.136 [2024-12-09 06:24:26.994731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.136 [2024-12-09 06:24:26.994731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.136 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.136 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:33.136 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.395 [2024-12-09 06:24:27.847134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.395 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:33.395 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.395 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.395 06:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:33.655 Malloc1 00:25:33.655 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.915 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:33.915 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.175 [2024-12-09 06:24:28.620641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.175 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:34.436 06:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:34.697 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:34.697 fio-3.35 00:25:34.697 Starting 1 thread 00:25:37.242 00:25:37.242 test: (groupid=0, jobs=1): 
err= 0: pid=429433: Mon Dec 9 06:24:31 2024 00:25:37.242 read: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(106MiB/2004msec) 00:25:37.242 slat (nsec): min=1887, max=294675, avg=2026.51, stdev=2569.37 00:25:37.242 clat (usec): min=3105, max=8931, avg=5219.18, stdev=389.91 00:25:37.242 lat (usec): min=3107, max=8933, avg=5221.21, stdev=390.09 00:25:37.242 clat percentiles (usec): 00:25:37.242 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948], 00:25:37.242 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5276], 00:25:37.242 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5800], 00:25:37.242 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 8094], 99.95th=[ 8291], 00:25:37.242 | 99.99th=[ 8717] 00:25:37.242 bw ( KiB/s): min=52360, max=54528, per=99.93%, avg=53928.00, stdev=1050.47, samples=4 00:25:37.242 iops : min=13090, max=13632, avg=13482.00, stdev=262.62, samples=4 00:25:37.242 write: IOPS=13.5k, BW=52.7MiB/s (55.2MB/s)(106MiB/2004msec); 0 zone resets 00:25:37.242 slat (nsec): min=1923, max=296630, avg=2096.63, stdev=1979.47 00:25:37.242 clat (usec): min=2695, max=7630, avg=4211.70, stdev=319.82 00:25:37.242 lat (usec): min=2697, max=7632, avg=4213.80, stdev=320.10 00:25:37.242 clat percentiles (usec): 00:25:37.242 | 1.00th=[ 3490], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3982], 00:25:37.242 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:25:37.242 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:25:37.242 | 99.00th=[ 4948], 99.50th=[ 5538], 99.90th=[ 6587], 99.95th=[ 6849], 00:25:37.242 | 99.99th=[ 7570] 00:25:37.242 bw ( KiB/s): min=52760, max=54336, per=100.00%, avg=53924.00, stdev=776.74, samples=4 00:25:37.242 iops : min=13190, max=13584, avg=13481.00, stdev=194.19, samples=4 00:25:37.242 lat (msec) : 4=11.43%, 10=88.57% 00:25:37.242 cpu : usr=73.64%, sys=25.21%, ctx=32, majf=0, minf=33 00:25:37.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:37.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.242 issued rwts: total=27037,27016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.242 00:25:37.242 Run status group 0 (all jobs): 00:25:37.242 READ: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=106MiB (111MB), run=2004-2004msec 00:25:37.242 WRITE: bw=52.7MiB/s (55.2MB/s), 52.7MiB/s-52.7MiB/s (55.2MB/s-55.2MB/s), io=106MiB (111MB), run=2004-2004msec 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:37.242 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:37.243 06:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:37.504 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:37.504 fio-3.35 00:25:37.504 Starting 1 thread 00:25:40.043 00:25:40.043 test: (groupid=0, jobs=1): err= 0: pid=429891: Mon Dec 9 06:24:34 2024 00:25:40.043 read: IOPS=9866, BW=154MiB/s (162MB/s)(309MiB/2004msec) 00:25:40.043 slat (usec): min=3, max=104, avg= 3.46, stdev= 1.55 00:25:40.043 clat (usec): min=1955, max=53170, avg=7914.91, stdev=3775.72 00:25:40.043 lat (usec): min=1959, max=53173, avg=7918.36, stdev=3775.85 00:25:40.043 clat percentiles (usec): 00:25:40.043 | 1.00th=[ 3752], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5866], 00:25:40.043 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8094], 00:25:40.043 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[11207], 00:25:40.043 | 99.00th=[14746], 99.50th=[44827], 99.90th=[52167], 99.95th=[52691], 00:25:40.043 | 99.99th=[53216] 00:25:40.043 bw ( KiB/s): min=69728, max=85824, per=49.81%, avg=78632.00, stdev=8247.99, samples=4 00:25:40.043 iops : min= 4358, max= 5364, avg=4914.50, stdev=515.50, samples=4 00:25:40.043 write: IOPS=5892, BW=92.1MiB/s (96.6MB/s)(161MiB/1747msec); 0 zone resets 00:25:40.043 slat (usec): 
min=36, max=326, avg=38.73, stdev= 9.48 00:25:40.043 clat (usec): min=2212, max=22358, avg=8690.20, stdev=1777.73 00:25:40.043 lat (usec): min=2249, max=22402, avg=8728.93, stdev=1781.34 00:25:40.043 clat percentiles (usec): 00:25:40.043 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7373], 00:25:40.043 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:25:40.043 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11469], 00:25:40.043 | 99.00th=[14222], 99.50th=[17433], 99.90th=[21890], 99.95th=[21890], 00:25:40.043 | 99.99th=[22152] 00:25:40.044 bw ( KiB/s): min=71488, max=90304, per=86.96%, avg=81992.00, stdev=9442.35, samples=4 00:25:40.044 iops : min= 4468, max= 5644, avg=5124.50, stdev=590.15, samples=4 00:25:40.044 lat (msec) : 2=0.01%, 4=1.13%, 10=84.29%, 20=14.02%, 50=0.47% 00:25:40.044 lat (msec) : 100=0.10% 00:25:40.044 cpu : usr=84.72%, sys=14.18%, ctx=20, majf=0, minf=49 00:25:40.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:40.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:40.044 issued rwts: total=19772,10295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:40.044 00:25:40.044 Run status group 0 (all jobs): 00:25:40.044 READ: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=309MiB (324MB), run=2004-2004msec 00:25:40.044 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=161MiB (169MB), run=1747-1747msec 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.044 rmmod nvme_tcp 00:25:40.044 rmmod nvme_fabrics 00:25:40.044 rmmod nvme_keyring 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 428676 ']' 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 428676 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 428676 ']' 00:25:40.044 06:24:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 428676 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428676 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428676' 00:25:40.044 killing process with pid 428676 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 428676 00:25:40.044 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 428676 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.304 06:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.845 00:25:42.845 real 0m17.683s 00:25:42.845 user 0m56.666s 00:25:42.845 sys 0m7.711s 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.845 ************************************ 00:25:42.845 END TEST nvmf_fio_host 00:25:42.845 ************************************ 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.845 ************************************ 00:25:42.845 START TEST nvmf_failover 00:25:42.845 ************************************ 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:42.845 * Looking for test storage... 00:25:42.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.845 06:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.845 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.846 --rc genhtml_branch_coverage=1 00:25:42.846 --rc genhtml_function_coverage=1 00:25:42.846 --rc genhtml_legend=1 00:25:42.846 --rc geninfo_all_blocks=1 00:25:42.846 --rc geninfo_unexecuted_blocks=1 00:25:42.846 00:25:42.846 ' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.846 --rc genhtml_branch_coverage=1 00:25:42.846 --rc genhtml_function_coverage=1 00:25:42.846 --rc genhtml_legend=1 00:25:42.846 --rc geninfo_all_blocks=1 00:25:42.846 --rc geninfo_unexecuted_blocks=1 00:25:42.846 00:25:42.846 ' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.846 --rc genhtml_branch_coverage=1 00:25:42.846 --rc genhtml_function_coverage=1 00:25:42.846 --rc genhtml_legend=1 00:25:42.846 --rc geninfo_all_blocks=1 00:25:42.846 --rc geninfo_unexecuted_blocks=1 00:25:42.846 00:25:42.846 ' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.846 --rc genhtml_branch_coverage=1 00:25:42.846 --rc genhtml_function_coverage=1 00:25:42.846 --rc genhtml_legend=1 00:25:42.846 --rc geninfo_all_blocks=1 00:25:42.846 --rc geninfo_unexecuted_blocks=1 00:25:42.846 00:25:42.846 ' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
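Everything these host tests do to the running target goes through rpc.py against the application's UNIX-domain socket (by default /var/tmp/spdk.sock when no -s is given). Stripped of the xtrace noise, the target bring-up sequence the fio_host test used above, and which failover.sh repeats below, amounts to roughly this sketch (bdev name Malloc0 and the flag set are taken from the log itself):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, same option flags the harness passes
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # extra ports give
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # failover paths to remove later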
00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.846 06:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:49.448 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:49.448 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:49.448 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:49.448 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.448 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:25:49.449 00:25:49.449 --- 10.0.0.2 ping statistics --- 00:25:49.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.449 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:25:49.449 00:25:49.449 --- 10.0.0.1 ping statistics --- 00:25:49.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.449 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.449 06:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=434290 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 434290 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 434290 ']' 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.710 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.710 [2024-12-09 06:24:44.079844] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:25:49.710 [2024-12-09 06:24:44.079903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.710 [2024-12-09 06:24:44.158161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.710 [2024-12-09 06:24:44.208597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:49.710 [2024-12-09 06:24:44.208652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.710 [2024-12-09 06:24:44.208659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.710 [2024-12-09 06:24:44.208666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.710 [2024-12-09 06:24:44.208672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.710 [2024-12-09 06:24:44.210653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.710 [2024-12-09 06:24:44.210818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.710 [2024-12-09 06:24:44.210819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.653 06:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.653 [2024-12-09 06:24:45.133739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.653 06:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:50.913 Malloc0 00:25:50.913 06:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.172 06:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.433 06:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.433 [2024-12-09 06:24:45.919117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.433 06:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:51.693 [2024-12-09 06:24:46.095603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.693 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:51.693 [2024-12-09 06:24:46.272135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=434727
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 434727 /var/tmp/bdevperf.sock
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 434727 ']'
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:51.952 06:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:52.890 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:52.890 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:52.890 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:53.150 NVMe0n1
00:25:53.150 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:53.411
00:25:53.411 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=434900
00:25:53.411 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:53.411 06:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:54.350 06:24:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:54.610 [2024-12-09 06:24:49.053030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51a90 is same with the state(6) to be set
[... the same tcp.c:1790 *ERROR* line repeated 14 more times for tqpair=0x1c51a90 (06:24:49.053071 through 06:24:49.053135), elided ...]
00:25:54.610 06:24:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:57.908 06:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:57.908
00:25:57.908 06:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:58.169 [2024-12-09 06:24:52.503599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52540 is same with the state(6) to be set
[... the same tcp.c:1790 *ERROR* line repeated roughly 30 more times for tqpair=0x1c52540 (06:24:52.503635 through 06:24:52.503792), elided ...]
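The initiator-side setup traced above boils down to a few commands; a minimal sketch, assuming an SPDK NVMe-oF/TCP target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420-4422 (paths, ports, and flags are taken from the trace; SPDK_DIR is a hypothetical shorthand for the workspace checkout):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # hypothetical shorthand
    SOCK=/var/tmp/bdevperf.sock
    # Start bdevperf idle (-z: wait for RPC configuration) on its own RPC socket.
    "$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    # Register the same controller name twice: with -x failover the second call
    # adds 10.0.0.2:4421 as an alternate path for NVMe0 rather than a new bdev.
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Kick off the timed I/O run asynchronously so listener changes can be
    # injected from the target side while it is in flight.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
    run_test_pid=$!

This matches what the trace shows: the first attach prints the created bdev (NVMe0n1), while the second prints nothing because it only registers an extra path on the existing controller.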
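The failovers themselves are driven entirely from the target side by removing and re-adding listeners; a condensed sketch of the sequence around this point in the trace (rpc.py here talks to the target application's default RPC socket, not bdevperf's, and the third path on port 4422 is attached in between, as shown above):

    RPC="$SPDK_DIR/scripts/rpc.py"     # target-side RPC, default socket
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
    sleep 3
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # fails over to 4422
    sleep 3
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore the first path
    sleep 1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420

Each remove_listener drops the active path out from under the initiator, which is what triggers the recv-state error bursts and the SQ-deletion aborts recorded below.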
00:25:58.170 06:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:01.470 06:24:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:01.470 [2024-12-09 06:24:55.679967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:01.470 06:24:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:02.410 06:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:02.410 06:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 434900
00:26:09.055 {
00:26:09.055   "results": [
00:26:09.055     {
00:26:09.055       "job": "NVMe0n1",
00:26:09.055       "core_mask": "0x1",
00:26:09.055       "workload": "verify",
00:26:09.055       "status": "finished",
00:26:09.055       "verify_range": {
00:26:09.055         "start": 0,
00:26:09.055         "length": 16384
00:26:09.055       },
00:26:09.055       "queue_depth": 128,
00:26:09.055       "io_size": 4096,
00:26:09.055       "runtime": 15.04568,
00:26:09.055       "iops": 11670.658953267648,
00:26:09.055       "mibps": 45.58851153620175,
00:26:09.055       "io_failed": 13837,
00:26:09.055       "io_timeout": 0,
00:26:09.055       "avg_latency_us": 10112.603690910788,
00:26:09.055       "min_latency_us": 382.8184615384615,
00:26:09.055       "max_latency_us": 43959.53230769231
00:26:09.055     }
00:26:09.055   ],
00:26:09.055   "core_count": 1
00:26:09.055 }
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 434727 ']'
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434727'
killing process with pid 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 434727
00:26:09.055 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-09 06:24:46.339537] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
[2024-12-09 06:24:46.339593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434727 ]
[2024-12-09 06:24:46.427424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 06:24:46.461244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
12246.00 IOPS, 47.84 MiB/s [2024-12-09T05:25:03.642Z]
[2024-12-09 06:24:49.055030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-09 06:24:49.055062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a long run of near-identical nvme_qpair entries elided: each in-flight READ/WRITE on qid:1 (lba 106512 through 107528) is printed and completed with ABORTED - SQ DELETION as the 10.0.0.2:4420 path drops, followed by 'aborting queued i/o' / 'Command completed manually' pairs for the requests still queued ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107504 len:8 PRP1 0x0 PRP2 0x0 00:26:09.057 [2024-12-09 06:24:49.057246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:09.057 [2024-12-09 06:24:49.057259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:09.057 [2024-12-09 06:24:49.057264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107512 len:8 PRP1 0x0 PRP2 0x0 00:26:09.057 [2024-12-09 06:24:49.057271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:09.057 [2024-12-09 06:24:49.057284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:09.057 [2024-12-09 06:24:49.057289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107520 len:8 PRP1 0x0 PRP2 0x0 00:26:09.057 [2024-12-09 06:24:49.057296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:09.057 [2024-12-09 06:24:49.057308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:09.057 [2024-12-09 06:24:49.057314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107528 len:8 PRP1 0x0 PRP2 0x0 00:26:09.057 [2024-12-09 06:24:49.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057357] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:09.057 [2024-12-09 06:24:49.057378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.057 [2024-12-09 06:24:49.057386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.057 [2024-12-09 06:24:49.057401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.057 [2024-12-09 06:24:49.057415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.057 [2024-12-09 06:24:49.057422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.057 [2024-12-09 06:24:49.057429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
00:26:09.057 [2024-12-09 06:24:49.057436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:09.057 [2024-12-09 06:24:49.060760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:09.057 [2024-12-09 06:24:49.060785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea790 (9): Bad file descriptor
00:26:09.057 [2024-12-09 06:24:49.218592] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:09.057 10726.00 IOPS, 41.90 MiB/s [2024-12-09T05:25:03.644Z] 10955.33 IOPS, 42.79 MiB/s [2024-12-09T05:25:03.644Z] 11284.25 IOPS, 44.08 MiB/s [2024-12-09T05:25:03.644Z] [2024-12-09 06:24:52.506114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.057 [2024-12-09 06:24:52.506499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.057 [2024-12-09 06:24:52.506506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.058 [2024-12-09 06:24:52.506598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.506993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.506998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.058 [2024-12-09 06:24:52.507344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.058 [2024-12-09 06:24:52.507349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:52.507362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:52.507374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:52.507386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70544 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70552 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70560 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70568 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70576 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70584 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70592 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70600 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70608 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70616 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70624 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70632 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70640 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70648 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70656 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70664 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70672 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70680 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70688 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70696 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70712 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70720 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70728 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70736 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.507904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70744 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.507915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:09.059 [2024-12-09 06:24:52.507919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:09.059 [2024-12-09 06:24:52.517951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70752 len:8 PRP1 0x0 PRP2 0x0
00:26:09.059 [2024-12-09 06:24:52.517977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.518022] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:09.059 [2024-12-09 06:24:52.518049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:09.059 [2024-12-09 06:24:52.518056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.518064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:09.059 [2024-12-09 06:24:52.518070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.518077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:09.059 [2024-12-09 06:24:52.518083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.518089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:09.059 [2024-12-09 06:24:52.518095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:52.518101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:09.059 [2024-12-09 06:24:52.518135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea790 (9): Bad file descriptor
00:26:09.059 [2024-12-09 06:24:52.520916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:09.059 [2024-12-09 06:24:52.632341] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:09.059 11111.20 IOPS, 43.40 MiB/s [2024-12-09T05:25:03.646Z] 11308.00 IOPS, 44.17 MiB/s [2024-12-09T05:25:03.646Z] 11385.57 IOPS, 44.47 MiB/s [2024-12-09T05:25:03.646Z] 11468.38 IOPS, 44.80 MiB/s [2024-12-09T05:25:03.646Z] [2024-12-09 06:24:56.865572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:56.865610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.059 [2024-12-09 06:24:56.865728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:56.865740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.059 [2024-12-09 06:24:56.865751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.059 [2024-12-09 06:24:56.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.060 [2024-12-09 06:24:56.865916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:09.060 [2024-12-09 06:24:56.865948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:09.060 [2024-12-09 06:24:56.865954] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.865961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.865966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.865972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.865978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.865985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.865990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.865997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:09.060 [2024-12-09 06:24:56.866206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.060 [2024-12-09 06:24:56.866296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.060 [2024-12-09 06:24:56.866547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.060 [2024-12-09 06:24:56.866555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:09.061 [2024-12-09 06:24:56.866585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.866990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.866995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:09.061 [2024-12-09 06:24:56.867001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.061 [2024-12-09 06:24:56.867130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.061 [2024-12-09 06:24:56.867216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519e60 is same with the state(6) to be set 00:26:09.061 [2024-12-09 06:24:56.867231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:09.061 [2024-12-09 06:24:56.867236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:09.061 [2024-12-09 06:24:56.867240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:848 len:8 PRP1 0x0 PRP2 0x0 00:26:09.061 [2024-12-09 06:24:56.867246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867279] 
bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:09.061 [2024-12-09 06:24:56.867297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.061 [2024-12-09 06:24:56.867303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.061 [2024-12-09 06:24:56.867314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.061 [2024-12-09 06:24:56.867325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.061 [2024-12-09 06:24:56.867336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.061 [2024-12-09 06:24:56.867342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:09.061 [2024-12-09 06:24:56.867361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea790 (9): Bad file descriptor 00:26:09.061 [2024-12-09 06:24:56.869914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:09.061 [2024-12-09 06:24:56.892926] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:26:09.061 11505.56 IOPS, 44.94 MiB/s [2024-12-09T05:25:03.648Z] 11557.70 IOPS, 45.15 MiB/s [2024-12-09T05:25:03.648Z] 11578.27 IOPS, 45.23 MiB/s [2024-12-09T05:25:03.648Z] 11627.75 IOPS, 45.42 MiB/s [2024-12-09T05:25:03.648Z] 11651.38 IOPS, 45.51 MiB/s [2024-12-09T05:25:03.648Z] 11679.79 IOPS, 45.62 MiB/s [2024-12-09T05:25:03.648Z] 11697.67 IOPS, 45.69 MiB/s
00:26:09.061 Latency(us)
00:26:09.061 [2024-12-09T05:25:03.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:09.061 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:09.061 Verification LBA range: start 0x0 length 0x4000
00:26:09.061 NVMe0n1 : 15.05 11670.66 45.59 919.67 0.00 10112.60 382.82 43959.53
00:26:09.061 [2024-12-09T05:25:03.648Z] ===================================================================================================================
00:26:09.061 [2024-12-09T05:25:03.648Z] Total : 11670.66 45.59 919.67 0.00 10112.60 382.82 43959.53
00:26:09.061 Received shutdown signal, test time was about 15.000000 seconds
00:26:09.061
00:26:09.061 Latency(us)
00:26:09.061 [2024-12-09T05:25:03.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:09.061 [2024-12-09T05:25:03.648Z] ===================================================================================================================
00:26:09.061 [2024-12-09T05:25:03.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=437476
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 437476 /var/tmp/bdevperf.sock
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 437476 ']'
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:09.061 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
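The count check traced above (failover.sh lines 65-67) greps the captured test output for 'Resetting controller successful' notices and fails the run unless exactly three failovers completed. A minimal sketch of that assertion in bash, assuming TESTLOG stands in for the log file being grepped (the trace elides it):

  # Sketch of the reset-count assertion seen in the trace above.
  # TESTLOG is an assumed placeholder; the file actually grepped is elided in the trace.
  count=$(grep -c 'Resetting controller successful' "$TESTLOG")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi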
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:09.062 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:09.322 [2024-12-09 06:25:03.635579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:09.322 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:09.322 [2024-12-09 06:25:03.803991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:09.322 06:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:09.582 NVMe0n1
00:26:09.842 06:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:10.103
00:26:10.363 06:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:10.363
00:26:10.363 06:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:10.363 06:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
06:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:10.623 06:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:13.921 06:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
06:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
06:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=438240
06:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
06:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 438240
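Before the results below, the trace above set up the multipath topology: the target exposes nqn.2016-06.io.spdk:cnode1 on additional ports, and bdevperf attaches every path under the same controller name with -x failover, so bdev_nvme treats the extra trids as standby paths rather than separate controllers. Condensed into a sketch, using the same commands and addresses as the log (the RPC variable is shorthand introduced here):

  # Shorthand for the rpc.py path used throughout the trace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Expose the same subsystem on two extra target ports:
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Attach each path under one bdev controller name via bdevperf's RPC socket;
  # -x failover registers the later trids as failover paths for NVMe0.
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done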
"workload": "verify", 00:26:14.862 "status": "finished", 00:26:14.862 "verify_range": { 00:26:14.862 "start": 0, 00:26:14.862 "length": 16384 00:26:14.862 }, 00:26:14.862 "queue_depth": 128, 00:26:14.862 "io_size": 4096, 00:26:14.862 "runtime": 1.005015, 00:26:14.862 "iops": 12502.30096068218, 00:26:14.862 "mibps": 48.837113127664765, 00:26:14.862 "io_failed": 0, 00:26:14.862 "io_timeout": 0, 00:26:14.862 "avg_latency_us": 10200.46405240442, 00:26:14.862 "min_latency_us": 1260.3076923076924, 00:26:14.862 "max_latency_us": 8519.68 00:26:14.862 } 00:26:14.862 ], 00:26:14.862 "core_count": 1 00:26:14.862 } 00:26:14.862 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.862 [2024-12-09 06:25:03.294354] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:26:14.862 [2024-12-09 06:25:03.294409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437476 ] 00:26:14.862 [2024-12-09 06:25:03.377569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.862 [2024-12-09 06:25:03.406982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.862 [2024-12-09 06:25:05.058902] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:14.862 [2024-12-09 06:25:05.058940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.862 [2024-12-09 06:25:05.058949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.862 [2024-12-09 06:25:05.058956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.862 [2024-12-09 06:25:05.058962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.862 [2024-12-09 06:25:05.058968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.862 [2024-12-09 06:25:05.058974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.862 [2024-12-09 06:25:05.058979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.862 [2024-12-09 06:25:05.058985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.862 [2024-12-09 06:25:05.058990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:26:14.862 [2024-12-09 06:25:05.059011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:14.862 [2024-12-09 06:25:05.059023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3790 (9): Bad file descriptor 00:26:14.862 [2024-12-09 06:25:05.070209] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:14.862 Running I/O for 1 seconds... 00:26:14.862 12437.00 IOPS, 48.58 MiB/s 00:26:14.862 Latency(us) 00:26:14.862 [2024-12-09T05:25:09.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.862 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:14.862 Verification LBA range: start 0x0 length 0x4000 00:26:14.862 NVMe0n1 : 1.01 12502.30 48.84 0.00 0.00 10200.46 1260.31 8519.68 00:26:14.862 [2024-12-09T05:25:09.449Z] =================================================================================================================== 00:26:14.862 [2024-12-09T05:25:09.449Z] Total : 12502.30 48.84 0.00 0.00 10200.46 1260.31 8519.68 00:26:14.862 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.862 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:15.122 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.383 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.383 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:15.383 06:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.651 06:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:18.952 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:18.952 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:18.952 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 437476 ']' 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437476' 00:26:18.953 killing process with pid 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 437476 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:18.953 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.213 rmmod nvme_tcp 00:26:19.213 rmmod nvme_fabrics 00:26:19.213 rmmod nvme_keyring 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 434290 ']' 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 434290 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 434290 ']' 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 434290 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434290 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434290' 00:26:19.213 killing process with pid 434290 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 434290 00:26:19.213 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 434290 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
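The killprocess trace above (pid 437476, then 434290) follows autotest_common.sh's usual pattern: validate the pid, check it is alive, resolve the process name, then kill and wait. A minimal sketch, reconstructed from the xtrace rather than copied from the helper itself:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @954: no pid given
        kill -0 "$pid" || return 1           # @958: is the process alive?
        if [ "$(uname)" = Linux ]; then      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        # @964 special-cases process_name = sudo; the reactor_0/reactor_1
        # processes seen here fall through to a plain kill + wait
        echo "killing process with pid $pid"
        kill "$pid"                          # @973
        wait "$pid"                          # @978
    }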
00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.473 06:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.386 00:26:21.386 real 0m39.041s 00:26:21.386 user 2m0.759s 00:26:21.386 sys 0m8.170s 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:21.386 ************************************ 00:26:21.386 END TEST nvmf_failover 00:26:21.386 ************************************ 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.386 06:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.648 ************************************ 00:26:21.648 START TEST nvmf_host_discovery 00:26:21.648 ************************************ 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.648 * Looking for test storage... 
00:26:21.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.648 --rc genhtml_branch_coverage=1 00:26:21.648 --rc genhtml_function_coverage=1 00:26:21.648 --rc genhtml_legend=1 00:26:21.648 --rc geninfo_all_blocks=1 00:26:21.648 --rc geninfo_unexecuted_blocks=1 00:26:21.648 00:26:21.648 ' 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.648 --rc genhtml_branch_coverage=1 00:26:21.648 --rc genhtml_function_coverage=1 00:26:21.648 --rc genhtml_legend=1 00:26:21.648 --rc geninfo_all_blocks=1 00:26:21.648 --rc geninfo_unexecuted_blocks=1 00:26:21.648 00:26:21.648 ' 00:26:21.648 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:21.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.649 --rc genhtml_branch_coverage=1 00:26:21.649 --rc genhtml_function_coverage=1 00:26:21.649 --rc genhtml_legend=1 00:26:21.649 --rc geninfo_all_blocks=1 00:26:21.649 --rc geninfo_unexecuted_blocks=1 00:26:21.649 00:26:21.649 ' 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:21.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.649 --rc genhtml_branch_coverage=1 00:26:21.649 --rc genhtml_function_coverage=1 00:26:21.649 --rc genhtml_legend=1 00:26:21.649 --rc geninfo_all_blocks=1 00:26:21.649 --rc geninfo_unexecuted_blocks=1 00:26:21.649 00:26:21.649 ' 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:21.649 06:25:16 
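The lcov gate above lands in scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them element by element. A condensed sketch of that logic (the real helper also validates each element through a decimal() check, omitted here):

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # all elements equal: true for ==, <=, >=
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov older than 2: use legacy --rc options'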
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.649 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
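Aside: each nested source of paths/export.sh above prepends the same toolchain directories again, which is why PATH grows to many repetitions of the golangci/protoc/go trio. This is harmless, but the duplicates can be collapsed in place with a one-liner like the following (illustrative only, not part of the test scripts):

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # drop the trailing separator awk leaves behind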
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.910 06:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:30.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:30.055 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.055 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.055 06:25:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:30.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:30.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.056 
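The device scan above walks each supported PCI ID and maps it to its kernel net device through sysfs; with both E810 ports (0x159b) resolving to up interfaces, is_hw=yes and the TCP init path is taken. The core of the mapping reduces to this (device addresses from the log):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # nvmf/common.sh@411: glob the net devices registered under this device
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done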
06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:30.056 00:26:30.056 --- 10.0.0.2 ping statistics --- 00:26:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.056 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:30.056 00:26:30.056 --- 10.0.0.1 ping statistics --- 00:26:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.056 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=443212 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 443212 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 443212 ']' 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.056 06:25:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.056 [2024-12-09 06:25:23.741128] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
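The nvmf_tcp_init sequence above is the standard two-endpoint setup: the first E810 port (cvl_0_0, target side) is moved into a fresh network namespace while the second (cvl_0_1, initiator side) stays in the root namespace, and both directions are then verified with a single ping each way. Condensed from the trace, with addresses and interface names as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface (@287/@790)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator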
00:26:30.056 [2024-12-09 06:25:23.741192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.056 [2024-12-09 06:25:23.821643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.056 [2024-12-09 06:25:23.870274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.056 [2024-12-09 06:25:23.870327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.056 [2024-12-09 06:25:23.870334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.056 [2024-12-09 06:25:23.870341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.056 [2024-12-09 06:25:23.870347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.056 [2024-12-09 06:25:23.871075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.056 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.056 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:30.056 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:30.056 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.057 [2024-12-09 06:25:24.619615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.057 [2024-12-09 06:25:24.631857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.057 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.318 null0 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.318 null1 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=443248 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 443248 /tmp/host.sock 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 443248 ']' 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:30.318 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.318 06:25:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.318 [2024-12-09 06:25:24.729291] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
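Once the namespaced target (nvmfpid 443212) is up and the second, host-side app (443248) finishes starting on /tmp/host.sock, the @32-@37 steps above provision the target: a TCP transport, a discovery listener on port 8009, and two null bdevs to publish later. The RPC sequence, condensed from the trace (rpc.py path shortened; the host-side discovery start follows at @50-@51 just below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side, over the default RPC socket
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512
    # host side, over the second app's socket
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test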
00:26:30.318 [2024-12-09 06:25:24.729354] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443248 ] 00:26:30.318 [2024-12-09 06:25:24.819388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.318 [2024-12-09 06:25:24.870378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.258 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.258 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:31.258 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.259 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.521 [2024-12-09 06:25:25.882884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.521 06:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:31.521 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:31.522 06:25:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:32.093 [2024-12-09 06:25:26.610341] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:32.093 [2024-12-09 06:25:26.610360] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:32.093 [2024-12-09 06:25:26.610373] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.355 
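Every waitforcondition call in this test (subsystem name, bdev list, path list, notification count) is the same bounded poll, traced above at autotest_common.sh@918-@924. A sketch of the helper and of the notification check it wraps, with bodies reconstructed from the xtrace (the notify_id cursor update is assumed, not shown in the trace):

    waitforcondition() {
        local cond=$1            # @918
        local max=10             # @919
        while (( max-- )); do                 # @920
            eval "$cond" && return 0          # @921-@922
            sleep 1                           # @924
        done
        return 1
    }

    get_notification_count() {
        # @74: count notifications issued since the last seen id
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed cursor advance
    }
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'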
[2024-12-09 06:25:26.698643] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:32.355 [2024-12-09 06:25:26.756346] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:32.355 [2024-12-09 06:25:26.757241] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16c2010:1 started. 00:26:32.355 [2024-12-09 06:25:26.758791] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.355 [2024-12-09 06:25:26.758808] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.355 [2024-12-09 06:25:26.767370] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16c2010 was disconnected and freed. delete nvme_qpair. 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.616 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.617 06:25:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.617 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:32.877 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.878 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.138 [2024-12-09 06:25:27.528533] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16c2240:1 started. 00:26:33.138 [2024-12-09 06:25:27.539032] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16c2240 was disconnected and freed. delete nvme_qpair. 
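The @918-@924 lines that recur throughout this trace are the xtrace of waitforcondition, the polling helper in common/autotest_common.sh that drives every check in this test: it evals an arbitrary shell condition once per second, up to ten times, and returns as soon as the condition holds (see the failed [[ '' == \n\v\m\e\0 ]] comparison above, followed by sleep 1 and a successful retry). A minimal sketch reconstructed from the xtrace; the timeout path at the end is an assumption, not visible in this excerpt:

waitforcondition() {
	local cond=$1   # @918: the condition arrives as a single string
	local max=10    # @919: at most ten attempts
	while (( max-- )); do              # @920
		eval "$cond" && return 0   # @921/@922: done as soon as it holds
		sleep 1                    # @924: otherwise wait and retry
	done
	return 1   # assumed: give up after ten failed attempts
}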
00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.138 [2024-12-09 06:25:27.619384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:33.138 [2024-12-09 06:25:27.620257] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:33.138 [2024-12-09 06:25:27.620277] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.138 [2024-12-09 06:25:27.707761] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:33.138 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:33.399 06:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:33.399 [2024-12-09 06:25:27.814657] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:33.399 [2024-12-09 06:25:27.814691] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.399 [2024-12-09 06:25:27.814699] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.399 [2024-12-09 06:25:27.814704] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:34.341 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.342 [2024-12-09 06:25:28.887416] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:34.342 [2024-12-09 06:25:28.887434] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.342 [2024-12-09 06:25:28.893044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.342 [2024-12-09 06:25:28.893058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.342 [2024-12-09 06:25:28.893066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.342 [2024-12-09 06:25:28.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.342 [2024-12-09 06:25:28.893081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.342 [2024-12-09 06:25:28.893086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.342 [2024-12-09 06:25:28.893092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.342 [2024-12-09 06:25:28.893097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.342 [2024-12-09 06:25:28.893103] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.342 [2024-12-09 06:25:28.903060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.342 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.342 [2024-12-09 06:25:28.913093] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.342 [2024-12-09 06:25:28.913102] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.342 [2024-12-09 06:25:28.913107] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.342 [2024-12-09 06:25:28.913111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.342 [2024-12-09 06:25:28.913124] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.342 [2024-12-09 06:25:28.913382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.342 [2024-12-09 06:25:28.913394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.342 [2024-12-09 06:25:28.913400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.342 [2024-12-09 06:25:28.913409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.342 [2024-12-09 06:25:28.913416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.342 [2024-12-09 06:25:28.913421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.342 [2024-12-09 06:25:28.913427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.342 [2024-12-09 06:25:28.913432] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.342 [2024-12-09 06:25:28.913436] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
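Note the two RPC sockets in play: this test runs two SPDK applications, the NVMe-oF target (reached through plain rpc_cmd on the default application socket) and a separate host/initiator instance (every bdev_nvme_*, bdev_get_bdevs and notify_* query carries -s /tmp/host.sock). The pattern, with two commands taken verbatim from this trace:

# Target side (default RPC socket): mutate the subsystem under test.
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

# Host side (second instance on /tmp/host.sock): observe what the
# discovery service attached as a result.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers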
00:26:34.342 [2024-12-09 06:25:28.913439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.342 [2024-12-09 06:25:28.923153] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.342 [2024-12-09 06:25:28.923164] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.342 [2024-12-09 06:25:28.923168] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.342 [2024-12-09 06:25:28.923171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.342 [2024-12-09 06:25:28.923182] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.342 [2024-12-09 06:25:28.923457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.342 [2024-12-09 06:25:28.923467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.342 [2024-12-09 06:25:28.923472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.342 [2024-12-09 06:25:28.923480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.342 [2024-12-09 06:25:28.923488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.342 [2024-12-09 06:25:28.923492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.342 [2024-12-09 06:25:28.923498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.342 [2024-12-09 06:25:28.923502] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.342 [2024-12-09 06:25:28.923505] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.342 [2024-12-09 06:25:28.923509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.605 [2024-12-09 06:25:28.933211] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.605 [2024-12-09 06:25:28.933221] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.605 [2024-12-09 06:25:28.933225] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.933228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.605 [2024-12-09 06:25:28.933239] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
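The repeating connect() failed, errno = 111 blocks here are expected: host/discovery.sh@127 above removed the target's 10.0.0.2:4420 listener while the host still held a controller path on that port, so each bdev_nvme reconnect attempt is refused (errno 111 is ECONNREFUSED on Linux). The retries keep failing until the next discovery log page no longer lists 4420 and the stale path is pruned ("4420 not found" further down), leaving only 4421. The two sides of the sequence, as they appear in this trace (NVMF_SECOND_PORT is 4421 here):

# Target: drop the first listener; the host's 4420 qpair goes away.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host: poll until only the second port remains on the controller.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'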
00:26:34.605 [2024-12-09 06:25:28.933638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.605 [2024-12-09 06:25:28.933668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.605 [2024-12-09 06:25:28.933677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.605 [2024-12-09 06:25:28.933691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.605 [2024-12-09 06:25:28.933748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.605 [2024-12-09 06:25:28.933757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.605 [2024-12-09 06:25:28.933763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.605 [2024-12-09 06:25:28.933768] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.605 [2024-12-09 06:25:28.933773] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.605 [2024-12-09 06:25:28.933776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.605 [2024-12-09 06:25:28.943270] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.605 [2024-12-09 06:25:28.943282] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.605 [2024-12-09 06:25:28.943286] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.943289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.605 [2024-12-09 06:25:28.943301] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.605 [2024-12-09 06:25:28.943686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.605 [2024-12-09 06:25:28.943716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.605 [2024-12-09 06:25:28.943725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.605 [2024-12-09 06:25:28.943739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.605 [2024-12-09 06:25:28.943757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.605 [2024-12-09 06:25:28.943763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.605 [2024-12-09 06:25:28.943770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.605 [2024-12-09 06:25:28.943775] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.605 [2024-12-09 06:25:28.943779] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.605 [2024-12-09 06:25:28.943785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.605 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.605 [2024-12-09 06:25:28.953332] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.605 [2024-12-09 06:25:28.953343] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.605 [2024-12-09 06:25:28.953347] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:34.605 [2024-12-09 06:25:28.953355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.605 [2024-12-09 06:25:28.953367] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.953594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.605 [2024-12-09 06:25:28.953604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.605 [2024-12-09 06:25:28.953610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.605 [2024-12-09 06:25:28.953618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.605 [2024-12-09 06:25:28.953625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.605 [2024-12-09 06:25:28.953630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.605 [2024-12-09 06:25:28.953636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.605 [2024-12-09 06:25:28.953640] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.605 [2024-12-09 06:25:28.953643] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.605 [2024-12-09 06:25:28.953647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.605 [2024-12-09 06:25:28.963395] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.605 [2024-12-09 06:25:28.963406] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.605 [2024-12-09 06:25:28.963410] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.963413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.605 [2024-12-09 06:25:28.963425] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.605 [2024-12-09 06:25:28.963703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.605 [2024-12-09 06:25:28.963713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.605 [2024-12-09 06:25:28.963718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.605 [2024-12-09 06:25:28.963727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.605 [2024-12-09 06:25:28.963734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.605 [2024-12-09 06:25:28.963739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.605 [2024-12-09 06:25:28.963744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.605 [2024-12-09 06:25:28.963749] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.605 [2024-12-09 06:25:28.963752] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.605 [2024-12-09 06:25:28.963755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.605 [2024-12-09 06:25:28.973457] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.605 [2024-12-09 06:25:28.973465] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.605 [2024-12-09 06:25:28.973471] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.973475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.605 [2024-12-09 06:25:28.973485] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.605 [2024-12-09 06:25:28.973763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.605 [2024-12-09 06:25:28.973772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1692650 with addr=10.0.0.2, port=4420 00:26:34.605 [2024-12-09 06:25:28.973777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1692650 is same with the state(6) to be set 00:26:34.605 [2024-12-09 06:25:28.973785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1692650 (9): Bad file descriptor 00:26:34.605 [2024-12-09 06:25:28.973792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.605 [2024-12-09 06:25:28.973797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.606 [2024-12-09 06:25:28.973802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.606 [2024-12-09 06:25:28.973806] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
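The condition checks above and below all shell out through a handful of one-line jq helpers whose pipelines are visible in the xtrace (host/discovery.sh@55, @59, @63 and @74-@75). Reconstructed from those pipelines, with the host socket written out; the real definitions may differ in detail, and the notify_id accumulation rule is inferred from the 0 -> 1 -> 2 -> 4 progression in this trace:

# @59: controller names on the host instance, sorted, space-separated.
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# @55: bdev names, e.g. "nvme0n1 nvme0n2" once two namespaces are attached.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# @63: trsvcids (ports) of every path of one controller, numerically sorted.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# @74/@75: count notifications newer than notify_id, then advance the cursor.
get_notification_count() {
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}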
00:26:34.606 [2024-12-09 06:25:28.973809] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.606 [2024-12-09 06:25:28.973813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.606 [2024-12-09 06:25:28.977410] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:34.606 [2024-12-09 06:25:28.977423] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.606 06:25:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.606 06:25:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.606 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.866 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.867 06:25:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.820 [2024-12-09 06:25:30.329616] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.820 [2024-12-09 06:25:30.329638] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.820 [2024-12-09 06:25:30.329648] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:36.081 [2024-12-09 06:25:30.417881] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:36.081 [2024-12-09 06:25:30.480420] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:36.081 [2024-12-09 06:25:30.481067] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x168fd20:1 started. 00:26:36.081 [2024-12-09 06:25:30.482518] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:36.081 [2024-12-09 06:25:30.482543] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.081 [2024-12-09 06:25:30.486430] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x168fd20 was disconnected and freed. delete nvme_qpair. 
00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.081 request: 00:26:36.081 { 00:26:36.081 "name": "nvme", 00:26:36.081 "trtype": "tcp", 00:26:36.081 "traddr": "10.0.0.2", 00:26:36.081 "adrfam": "ipv4", 00:26:36.081 "trsvcid": "8009", 00:26:36.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.081 "wait_for_attach": true, 00:26:36.081 "method": "bdev_nvme_start_discovery", 00:26:36.081 "req_id": 1 00:26:36.081 } 00:26:36.081 Got JSON-RPC error response 00:26:36.081 response: 00:26:36.081 { 00:26:36.081 "code": -17, 00:26:36.081 "message": "File exists" 00:26:36.081 } 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.081 request: 00:26:36.081 { 00:26:36.081 "name": "nvme_second", 00:26:36.081 "trtype": "tcp", 00:26:36.081 "traddr": "10.0.0.2", 00:26:36.081 "adrfam": "ipv4", 00:26:36.081 "trsvcid": "8009", 00:26:36.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.081 "wait_for_attach": true, 00:26:36.081 "method": "bdev_nvme_start_discovery", 00:26:36.081 "req_id": 1 00:26:36.081 } 00:26:36.081 Got JSON-RPC error response 00:26:36.081 response: 00:26:36.081 { 00:26:36.081 "code": -17, 00:26:36.081 "message": "File exists" 00:26:36.081 } 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.081 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.340 06:25:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.304 [2024-12-09 06:25:31.737947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.304 [2024-12-09 06:25:31.737973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17efa20 with addr=10.0.0.2, port=8010 00:26:37.304 [2024-12-09 06:25:31.737986] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:37.304 [2024-12-09 06:25:31.737991] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:37.304 [2024-12-09 06:25:31.737996] 
bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:38.243 [2024-12-09 06:25:32.740285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.243 [2024-12-09 06:25:32.740304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17efa20 with addr=10.0.0.2, port=8010 00:26:38.243 [2024-12-09 06:25:32.740312] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:38.243 [2024-12-09 06:25:32.740317] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:38.243 [2024-12-09 06:25:32.740322] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:39.182 [2024-12-09 06:25:33.742311] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:39.182 request: 00:26:39.182 { 00:26:39.182 "name": "nvme_second", 00:26:39.182 "trtype": "tcp", 00:26:39.182 "traddr": "10.0.0.2", 00:26:39.182 "adrfam": "ipv4", 00:26:39.182 "trsvcid": "8010", 00:26:39.182 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:39.182 "wait_for_attach": false, 00:26:39.182 "attach_timeout_ms": 3000, 00:26:39.182 "method": "bdev_nvme_start_discovery", 00:26:39.182 "req_id": 1 00:26:39.182 } 00:26:39.182 Got JSON-RPC error response 00:26:39.182 response: 00:26:39.182 { 00:26:39.182 "code": -110, 00:26:39.182 "message": "Connection timed out" 00:26:39.182 } 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.182 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 443248 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.443 rmmod nvme_tcp 00:26:39.443 rmmod nvme_fabrics 00:26:39.443 rmmod nvme_keyring 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 443212 ']' 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 443212 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 443212 ']' 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 443212 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443212 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443212' 00:26:39.443 killing process with pid 443212 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 443212 00:26:39.443 06:25:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 443212 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.704 06:25:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.704 06:25:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.614 00:26:41.614 real 0m20.096s 00:26:41.614 user 0m23.303s 00:26:41.614 sys 0m7.119s 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.614 ************************************ 00:26:41.614 END TEST nvmf_host_discovery 00:26:41.614 ************************************ 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.614 ************************************ 00:26:41.614 START TEST nvmf_host_multipath_status 00:26:41.614 ************************************ 00:26:41.614 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:41.877 * Looking for test storage... 00:26:41.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:41.877 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.878 --rc genhtml_branch_coverage=1 00:26:41.878 --rc genhtml_function_coverage=1 00:26:41.878 --rc genhtml_legend=1 00:26:41.878 --rc geninfo_all_blocks=1 00:26:41.878 --rc geninfo_unexecuted_blocks=1 00:26:41.878 00:26:41.878 ' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.878 --rc genhtml_branch_coverage=1 00:26:41.878 --rc genhtml_function_coverage=1 00:26:41.878 --rc genhtml_legend=1 00:26:41.878 --rc geninfo_all_blocks=1 00:26:41.878 --rc geninfo_unexecuted_blocks=1 00:26:41.878 00:26:41.878 ' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.878 --rc genhtml_branch_coverage=1 00:26:41.878 --rc genhtml_function_coverage=1 00:26:41.878 --rc genhtml_legend=1 00:26:41.878 --rc geninfo_all_blocks=1 00:26:41.878 --rc geninfo_unexecuted_blocks=1 00:26:41.878 00:26:41.878 ' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.878 --rc genhtml_branch_coverage=1 00:26:41.878 --rc genhtml_function_coverage=1 00:26:41.878 --rc genhtml_legend=1 00:26:41.878 --rc 
geninfo_all_blocks=1 00:26:41.878 --rc geninfo_unexecuted_blocks=1 00:26:41.878 00:26:41.878 ' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:26:41.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.878 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.879 06:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.019 06:25:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.019 
06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:50.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:50.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:50.019 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:50.019 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:26:50.019 00:26:50.019 --- 10.0.0.2 ping statistics --- 00:26:50.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.019 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:26:50.019 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:26:50.019 00:26:50.020 --- 10.0.0.1 ping statistics --- 00:26:50.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.020 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=448865 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 448865 
00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 448865 ']' 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.020 06:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:50.020 [2024-12-09 06:25:43.756958] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:26:50.020 [2024-12-09 06:25:43.757019] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.020 [2024-12-09 06:25:43.854003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.020 [2024-12-09 06:25:43.903859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.020 [2024-12-09 06:25:43.903908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.020 [2024-12-09 06:25:43.903917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.020 [2024-12-09 06:25:43.903923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.020 [2024-12-09 06:25:43.903930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.020 [2024-12-09 06:25:43.905699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.020 [2024-12-09 06:25:43.905796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.020 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.020 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:50.020 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:50.020 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.020 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.281 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.281 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=448865 00:26:50.281 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:50.281 [2024-12-09 06:25:44.796717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.281 06:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:50.561 Malloc0 00:26:50.561 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:50.821 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.081 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.081 [2024-12-09 06:25:45.574182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.081 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.341 [2024-12-09 06:25:45.754658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=449200 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 449200 
/var/tmp/bdevperf.sock 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 449200 ']' 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.341 06:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:51.601 06:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.601 06:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:51.602 06:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:51.861 06:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:52.122 Nvme0n1 00:26:52.122 06:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:52.691 Nvme0n1 00:26:52.691 06:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:52.691 06:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:54.600 06:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:54.600 06:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:54.861 06:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:55.121 06:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:56.057 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:56.057 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:56.057 06:25:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.057 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.317 06:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.577 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.577 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.577 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.577 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:56.837 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.837 06:25:51 
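Each status probe above is one @64 pair: a bdev_nvme_get_io_paths RPC against the bdevperf socket followed by a jq filter over the returned io_paths. A sketch of port_status consistent with that trace (the function body is an assumed reconstruction, not the script's actual source):

    # port_status <trsvcid> <field> <expected>: assert one io_path flag via bdevperf's RPC socket.
    bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($bperf_rpc bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }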
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.096 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.096 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:57.096 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:57.356 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:57.356 06:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:58.738 06:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:58.738 06:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.738 06:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.738 06:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.738 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.997 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.997 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.997 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.997 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.258 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.517 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.517 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:59.518 06:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:59.777 06:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:00.037 06:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:00.977 06:25:55 
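check_status, in turn, is just six port_status assertions in a fixed order, which explains the six-boolean argument lists traced at @68-@73. An assumed reconstruction (the chaining is a guess; the call order and arguments match the trace):

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }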
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.977 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.235 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.235 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.235 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.235 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:01.494 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.494 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:01.494 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.494 06:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.753 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.012 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.012 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:02.012 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:02.271 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:02.271 06:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:03.652 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:03.652 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:03.652 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.652 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:03.652 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.653 06:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.653 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.913 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.913 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.913 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.913 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.173 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:04.432 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.433 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:04.433 06:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:04.693 06:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:04.693 06:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.073 
06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.073 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:06.333 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.333 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:06.333 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.333 06:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.592 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.592 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:06.592 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.592 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:06.851 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:07.109 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:07.369 06:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:08.306 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:08.306 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
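With both listeners set inaccessible (@108), current and accessible drop to false while connected stays true: the TCP connections survive even though ANA forbids I/O on them. When eyeballing such transitions it can help to dump all three flags in one round-trip instead of six; a hypothetical one-liner over the same RPC the checks above use:

    # Hypothetical: print trsvcid plus the three flags checked above, one line per path.
    bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $bperf_rpc bdev_nvme_get_io_paths | jq -r \
        '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'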
-- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:08.306 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.306 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.565 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.565 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.565 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.565 06:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.565 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.565 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.565 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.565 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.825 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.825 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.825 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.825 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.084 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.344 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.344 06:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:09.604 06:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:09.604 06:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:09.864 06:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:09.864 06:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:10.804 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:10.804 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:10.804 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.804 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.064 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.064 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:11.064 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.064 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.323 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.323 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.323 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.323 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.583 06:26:05 
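The @116 call above switches Nvme0n1 from the default active-passive to active-active multipath, and the very next check (@121) expects current==true on both ports at once: with both listeners optimized, I/O now fans out over both paths. A sketch of the switch plus a quick verification; the command is verbatim from the trace, the jq verification is a hypothetical addition.

    # Active-active: every optimized path may carry I/O, so both can be "current" at once.
    bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $bperf_rpc bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    $bperf_rpc bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select(.current==true).transport.trsvcid'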
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.583 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.583 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.583 06:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:11.583 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.583 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:11.583 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.583 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:11.843 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.843 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:11.843 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.843 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.103 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.103 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:12.103 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:12.103 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:12.362 06:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:13.301 06:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:13.301 06:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:13.301 06:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.301 06:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:13.561 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.561 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:13.561 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.561 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.821 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:14.081 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.081 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:14.081 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.081 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.342 
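These host-side checks only see what bdevperf observes; the target's advertised states can be cross-checked as well. A hypothetical query: nvmf_subsystem_get_listeners is a standard SPDK RPC, but the jq field names here are assumptions about its output shape.

    # Hypothetical cross-check of the ANA states the target advertises per listener.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | "\(.address.trsvcid) \(.ana_states)"'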
06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:14.342 06:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:14.601 06:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:14.861 06:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:15.796 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:15.796 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:15.796 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.796 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:16.054 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.054 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:16.054 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.054 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.312 06:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:16.571 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.571 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:16.571 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.571 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:16.830 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:17.089 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:17.348 06:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:18.286 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:18.286 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:18.286 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:18.286 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.546 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.546 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:18.546 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.546 06:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:27:18.546 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.546 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.546 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.546 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.807 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.807 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.807 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.807 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.067 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 449200 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 449200 ']' 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 449200 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 449200
00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449200'
killing process with pid 449200
00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 449200
00:27:19.327 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 449200
00:27:19.327 {
00:27:19.327 "results": [
00:27:19.327 {
00:27:19.327 "job": "Nvme0n1",
00:27:19.327 "core_mask": "0x4",
00:27:19.327 "workload": "verify",
00:27:19.327 "status": "terminated",
00:27:19.327 "verify_range": {
00:27:19.327 "start": 0,
00:27:19.327 "length": 16384
00:27:19.327 },
00:27:19.327 "queue_depth": 128,
00:27:19.327 "io_size": 4096,
00:27:19.327 "runtime": 26.619045,
00:27:19.327 "iops": 11665.144260434587,
00:27:19.327 "mibps": 45.566969767322604,
00:27:19.327 "io_failed": 0,
00:27:19.327 "io_timeout": 0,
00:27:19.327 "avg_latency_us": 10953.282011388028,
00:27:19.327 "min_latency_us": 401.7230769230769,
00:27:19.327 "max_latency_us": 3071521.083076923
00:27:19.327 }
00:27:19.327 ],
00:27:19.327 "core_count": 1
00:27:19.327 }
00:27:19.613 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 449200
00:27:19.613 06:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:19.613 [2024-12-09 06:25:45.814540] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:27:19.613 [2024-12-09 06:25:45.814613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449200 ]
00:27:19.613 [2024-12-09 06:25:45.888486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:19.613 [2024-12-09 06:25:45.937985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:19.613 Running I/O for 90 seconds...
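The terminated-job summary above is plain JSON, so the headline numbers are easy to pull out; a hypothetical extraction assuming the blob was saved to results.json (the field names are exactly those in the summary):

    # Hypothetical: summarize the bdevperf result blob saved as results.json.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us over \(.runtime) s"' results.json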
00:27:19.613 10372.00 IOPS, 40.52 MiB/s [2024-12-09T05:26:14.200Z] 11377.00 IOPS, 44.44 MiB/s [2024-12-09T05:26:14.200Z] 11790.67 IOPS, 46.06 MiB/s [2024-12-09T05:26:14.200Z] 11964.00 IOPS, 46.73 MiB/s [2024-12-09T05:26:14.200Z] 12110.00 IOPS, 47.30 MiB/s [2024-12-09T05:26:14.200Z] 12158.83 IOPS, 47.50 MiB/s [2024-12-09T05:26:14.200Z] 12204.00 IOPS, 47.67 MiB/s [2024-12-09T05:26:14.200Z] 12218.12 IOPS, 47.73 MiB/s [2024-12-09T05:26:14.200Z] 12248.78 IOPS, 47.85 MiB/s [2024-12-09T05:26:14.200Z] 12269.20 IOPS, 47.93 MiB/s [2024-12-09T05:26:14.200Z] 12292.09 IOPS, 48.02 MiB/s [2024-12-09T05:26:14.200Z] [2024-12-09 06:25:59.082153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.613 [2024-12-09 06:25:59.082186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.613 [2024-12-09 06:25:59.082203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.613 [2024-12-09 06:25:59.082210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.613 [2024-12-09 06:25:59.082221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.613 [2024-12-09 06:25:59.082227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.613 [2024-12-09 06:25:59.082238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.613 [2024-12-09 06:25:59.082243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.613 [2024-12-09 06:25:59.082254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.613 [2024-12-09 06:25:59.082259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
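The WRITE completions in this stretch of try.txt all carry status ASYMMETRIC ACCESS INACCESSIBLE (03/02); their 06:25:59 timestamps line up with the @104/@108 transitions above, where listeners were flipped to inaccessible mid-I/O. A hypothetical tally over the saved log:

    # Hypothetical count of ANA-related completion statuses in the bdevperf output.
    grep -oE 'ASYMMETRIC ACCESS [A-Z]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c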
00:27:19.614 [2024-12-09 06:25:59.082628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.614 [2024-12-09 06:25:59.082897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.614 [2024-12-09 06:25:59.082908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.082923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.082940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.082988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.082994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.615 [2024-12-09 06:25:59.083876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.615 [2024-12-09 06:25:59.083882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 
sqhd:0037 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.083988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.083993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.616 [2024-12-09 06:25:59.084489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 
[2024-12-09 06:25:59.084506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.616 [2024-12-09 06:25:59.084700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.616 [2024-12-09 06:25:59.084711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.084986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.084992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.617 [2024-12-09 06:25:59.085008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.617 [2024-12-09 06:25:59.085601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.617 [2024-12-09 06:25:59.085607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.085985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.085991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.086002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.086007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.086017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.086022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 06:25:59.086033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.618 [2024-12-09 06:25:59.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.618 [2024-12-09 
06:25:59.086049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... several hundred similar *NOTICE* pairs elided (00:27:19.618-00:27:19.624, 2024-12-09 06:25:59.086-06:25:59.098): nvme_io_qpair_print_command entries for interleaved WRITE (sqid:1, lba 120264-120896, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (sqid:1, lba 119880-120256, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands, each followed by a spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; the same LBA ranges repeat with different cids as the queued I/O is printed again while the path's ANA state remains inaccessible ...]
00:27:19.624 [2024-12-09 06:25:59.098682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:105 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 
06:25:59.098843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.624 [2024-12-09 06:25:59.098880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.098896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.098912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.098928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.098944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.098956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.098961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 
cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099887] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.099992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.099997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.624 [2024-12-09 06:25:59.100008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.624 [2024-12-09 06:25:59.100013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 
06:25:59.100044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120552 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 
06:25:59.100790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.625 [2024-12-09 06:25:59.100900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.625 [2024-12-09 06:25:59.100905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.100916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.100921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.100933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.100938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.100948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.100954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.100964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.100970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.100980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.100986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.626 [2024-12-09 06:25:59.101666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.626 [2024-12-09 06:25:59.101774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.626 [2024-12-09 06:25:59.101779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 
06:25:59.101822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.627 [2024-12-09 06:25:59.101966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.627 [2024-12-09 06:25:59.101971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:19.627 [2024-12-09 06:25:59.101982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.627 [2024-12-09 06:25:59.101987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0
[... the same command/completion NOTICE pair repeats for every outstanding I/O on qid:1: READs (lba:119880-120256, len:8, SGL TRANSPORT DATA BLOCK) and WRITEs (lba:120264-120896, len:8, SGL DATA BLOCK OFFSET len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:27:19.632 [2024-12-09 06:25:59.110597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.632 [2024-12-09 06:25:59.110602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:19.632 [2024-12-09 06:25:59.110614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.110630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.110635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.111147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.111157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.111168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.111174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.111185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.111191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.111201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.632 [2024-12-09 06:25:59.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.632 [2024-12-09 06:25:59.111217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120000 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.633 [2024-12-09 06:25:59.111612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.633 [2024-12-09 06:25:59.111814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.633 [2024-12-09 06:25:59.111819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.111989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.111994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.634 [2024-12-09 06:25:59.112122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 
[2024-12-09 06:25:59.112664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120384 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.634 [2024-12-09 06:25:59.112868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.634 [2024-12-09 06:25:59.112874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.112988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.112999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:27:19.635 [2024-12-09 06:25:59.113143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.635 [2024-12-09 06:25:59.113874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.635 [2024-12-09 06:25:59.113884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.113986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.113997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.114002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.636 [2024-12-09 06:25:59.114013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:19.636 [2024-12-09 06:25:59.114018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs elided: nvme_io_qpair_print_command READ/WRITE sqid:1 nsid:1 lba:119880-120896 len:8, each completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 06:25:59.114028 through 06:25:59.123405 ...]
00:27:19.641 [2024-12-09 06:25:59.123412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120376 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.641 [2024-12-09 06:25:59.123511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.641 [2024-12-09 06:25:59.123516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 
06:25:59.123734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 
cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.123991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.123996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.642 [2024-12-09 06:25:59.124578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.642 [2024-12-09 06:25:59.124588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 
[2024-12-09 06:25:59.124660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.124900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.124991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.124997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0047 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.643 [2024-12-09 06:25:59.125462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.643 [2024-12-09 06:25:59.125498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.643 [2024-12-09 06:25:59.125509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125578] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 
[2024-12-09 06:25:59.125738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.644 [2024-12-09 06:25:59.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.125990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.125996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.644 [2024-12-09 06:25:59.126120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.644 [2024-12-09 06:25:59.126125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.645 [2024-12-09 06:25:59.126136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.645 [2024-12-09 06:25:59.126141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.645 [2024-12-09 06:25:59.126152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.645 [2024-12-09 06:25:59.126157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.645 [2024-12-09 06:25:59.126167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.645 [2024-12-09 06:25:59.126173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.645 [2024-12-09 06:25:59.126183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.645 [2024-12-09 06:25:59.126189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.645 [2024-12-09 06:25:59.126199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.645 [2024-12-09 06:25:59.126204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:27:19.645 [2024-12-09 06:25:59.126215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-12-09 06:25:59.126220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:19.645 [2024-12-09 06:25:59.126231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-12-09 06:25:59.126236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... several hundred further NOTICE pairs omitted: WRITE commands (sqid:1, lba 120264-120896, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba 119880-120256, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:27:19.650 [2024-12-09 06:25:59.135653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.650 [2024-12-09 06:25:59.135658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.650 [2024-12-09
06:25:59.135669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.650 [2024-12-09 06:25:59.135722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.650 [2024-12-09 06:25:59.135880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.650 [2024-12-09 06:25:59.135885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.135993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.135998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.651 [2024-12-09 06:25:59.136238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.651 [2024-12-09 06:25:59.136492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.651 [2024-12-09 06:25:59.136497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.136588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.136594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 
06:25:59.137476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120680 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.652 [2024-12-09 06:25:59.137649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.652 [2024-12-09 06:25:59.137661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.137885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.137890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:27:19.653 [2024-12-09 06:25:59.138429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.653 [2024-12-09 06:25:59.138536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.653 [2024-12-09 06:25:59.138879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.653 [2024-12-09 06:25:59.138884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.653 [... log trimmed: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs (2024-12-09 06:25:59.138895 through 06:25:59.141510) omitted; READ and WRITE commands on sqid:1, len:8, LBAs 119880-120896, every completion reported ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 ...]
00:27:19.657 12171.42 IOPS, 47.54 MiB/s [2024-12-09T05:26:14.244Z] 11235.15 IOPS, 43.89 MiB/s [2024-12-09T05:26:14.244Z] 10432.64 IOPS, 40.75 MiB/s [2024-12-09T05:26:14.244Z] 9798.67 IOPS, 38.28 MiB/s [2024-12-09T05:26:14.244Z] 9964.94 IOPS, 38.93 MiB/s [2024-12-09T05:26:14.244Z] 10137.00 IOPS, 39.60 MiB/s [2024-12-09T05:26:14.244Z] 10467.83 IOPS, 40.89 MiB/s [2024-12-09T05:26:14.244Z] 10774.68 IOPS, 42.09 MiB/s [2024-12-09T05:26:14.244Z] 10944.30 IOPS, 42.75 MiB/s [2024-12-09T05:26:14.244Z] 11013.67 IOPS, 43.02 MiB/s [2024-12-09T05:26:14.244Z] 11081.18 IOPS, 43.29 MiB/s [2024-12-09T05:26:14.244Z] 11301.61 IOPS, 44.15 MiB/s [2024-12-09T05:26:14.244Z] 11507.46 IOPS, 44.95 MiB/s [2024-12-09T05:26:14.244Z]
00:27:19.657 [... log trimmed: a second burst of nvme_qpair.c command/completion *NOTICE* pairs (2024-12-09 06:26:11.707190 through 06:26:11.710117) omitted; READ and WRITE commands on sqid:1, len:8, LBAs 40104-40944, again all completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 ...]
00:27:19.659 11609.16 IOPS, 45.35 MiB/s [2024-12-09T05:26:14.246Z] 11649.42 IOPS, 45.51 MiB/s [2024-12-09T05:26:14.246Z] Received shutdown signal, test time was about 26.619674 seconds
00:27:19.659
00:27:19.659 Latency(us)
00:27:19.659 [2024-12-09T05:26:14.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:19.659 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:19.659 Verification LBA range: start 0x0 length 0x4000
00:27:19.659 Nvme0n1 : 26.62 11665.14 45.57 0.00 0.00 10953.28 401.72 3071521.08
00:27:19.659 [2024-12-09T05:26:14.246Z] ===================================================================================================================
00:27:19.659 [2024-12-09T05:26:14.246Z] Total : 11665.14 45.57 0.00 0.00 10953.28 401.72 3071521.08
448865 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448865 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448865' 00:27:19.920 killing process with pid 448865 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 448865 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 448865 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.920 06:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:22.460 00:27:22.460 real 0m40.266s 00:27:22.460 user 1m43.924s 00:27:22.460 sys 0m11.338s 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:22.460 ************************************ 00:27:22.460 END TEST nvmf_host_multipath_status 00:27:22.460 ************************************ 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.460 ************************************ 00:27:22.460 START TEST nvmf_discovery_remove_ifc 00:27:22.460 ************************************ 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:22.460 * Looking for test storage... 00:27:22.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:22.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.460 --rc genhtml_branch_coverage=1 00:27:22.460 --rc genhtml_function_coverage=1 00:27:22.460 --rc genhtml_legend=1 00:27:22.460 --rc geninfo_all_blocks=1 00:27:22.460 --rc geninfo_unexecuted_blocks=1 00:27:22.460 00:27:22.460 ' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:22.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.460 --rc genhtml_branch_coverage=1 00:27:22.460 --rc genhtml_function_coverage=1 00:27:22.460 --rc genhtml_legend=1 00:27:22.460 --rc geninfo_all_blocks=1 00:27:22.460 --rc geninfo_unexecuted_blocks=1 00:27:22.460 00:27:22.460 ' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:22.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.460 --rc genhtml_branch_coverage=1 00:27:22.460 --rc genhtml_function_coverage=1 00:27:22.460 --rc genhtml_legend=1 00:27:22.460 --rc geninfo_all_blocks=1 00:27:22.460 --rc geninfo_unexecuted_blocks=1 00:27:22.460 00:27:22.460 ' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:22.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.460 --rc genhtml_branch_coverage=1 00:27:22.460 --rc genhtml_function_coverage=1 00:27:22.460 --rc genhtml_legend=1 00:27:22.460 --rc geninfo_all_blocks=1 00:27:22.460 --rc geninfo_unexecuted_blocks=1 00:27:22.460 00:27:22.460 ' 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.460 
06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.460 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:22.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:22.461 06:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:30.597 06:26:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:30.597 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.597 06:26:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:30.597 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:30.597 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.597 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:30.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.598 06:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.598 
06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:27:30.598 00:27:30.598 --- 10.0.0.2 ping statistics --- 00:27:30.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.598 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:30.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:27:30.598 00:27:30.598 --- 10.0.0.1 ping statistics --- 00:27:30.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.598 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=458143 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 458143 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 458143 ']' 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.598 06:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-12-09 06:26:24.261995] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:27:30.598 [2024-12-09 06:26:24.262076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.598 [2024-12-09 06:26:24.342671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.598 [2024-12-09 06:26:24.390925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.598 [2024-12-09 06:26:24.390975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.598 [2024-12-09 06:26:24.390982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.598 [2024-12-09 06:26:24.390989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.598 [2024-12-09 06:26:24.390995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.598 [2024-12-09 06:26:24.391741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.598 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-12-09 06:26:25.132920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.598 [2024-12-09 06:26:25.141171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:30.598 null0 00:27:30.598 [2024-12-09 06:26:25.173148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=458458 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 458458 /tmp/host.sock 00:27:30.858 06:26:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 458458 ']' 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:30.858 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.858 06:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.858 [2024-12-09 06:26:25.260586] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:27:30.858 [2024-12-09 06:26:25.260648] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458458 ] 00:27:30.858 [2024-12-09 06:26:25.349858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.858 [2024-12-09 06:26:25.400910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:31.798 06:26:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.798 06:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.750 [2024-12-09 06:26:27.212337] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:32.750 [2024-12-09 06:26:27.212357] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:32.750 [2024-12-09 06:26:27.212370] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:33.011 [2024-12-09 06:26:27.342767] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:33.011 [2024-12-09 06:26:27.565031] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:33.011 [2024-12-09 06:26:27.566070] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fa1010:1 started. 00:27:33.011 [2024-12-09 06:26:27.567531] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:33.011 [2024-12-09 06:26:27.567573] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:33.011 [2024-12-09 06:26:27.567595] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:33.011 [2024-12-09 06:26:27.567607] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:33.011 [2024-12-09 06:26:27.567626] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:33.011 [2024-12-09 06:26:27.571390] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fa1010 was disconnected and freed. delete nvme_qpair. 
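The records above complete the attach path: the discovery controller connects to 10.0.0.2:8009, the returned log page advertises nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, a controller and qpair (0x1fa1010) are created, and the nvme0n1 bdev appears. The wait_for_bdev/get_bdev_list polling exercised by the records that follow can be sketched roughly as below; the helper bodies are inferred from the xtrace output (rpc_cmd, /tmp/host.sock, jq | sort | xargs, sleep 1), not copied verbatim from discovery_remove_ifc.sh:

    # Sketch reconstructed from the xtrace records; exact bodies are assumed.
    get_bdev_list() {
        # List the host app's bdev names as one sorted line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Re-poll once per second until the list equals the expected value,
        # e.g. "nvme0n1" right after attach, or "" once the controller is lost.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }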
00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.011 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.271 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:33.272 06:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.654 06:26:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:34.654 06:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:35.591 06:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:36.554 06:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.536 06:26:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.536 06:26:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.536 06:26:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:37.536 06:26:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.512 [2024-12-09 06:26:33.008181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:38.513 [2024-12-09 06:26:33.008218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.513 [2024-12-09 06:26:33.008227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.513 [2024-12-09 06:26:33.008235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.513 [2024-12-09 06:26:33.008241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.513 [2024-12-09 06:26:33.008247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.513 [2024-12-09 06:26:33.008253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.513 [2024-12-09 06:26:33.008258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.513 [2024-12-09 06:26:33.008263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.513 [2024-12-09 06:26:33.008269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.513 [2024-12-09 06:26:33.008275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.513 [2024-12-09 06:26:33.008284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d920 is same with the state(6) to be set 00:27:38.513 [2024-12-09 06:26:33.018202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7d920 (9): Bad file descriptor 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.513 06:26:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.513 [2024-12-09 06:26:33.028235] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:38.513 [2024-12-09 06:26:33.028245] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:38.513 [2024-12-09 06:26:33.028251] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:38.513 [2024-12-09 06:26:33.028255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:38.513 [2024-12-09 06:26:33.028273] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:39.958 [2024-12-09 06:26:34.083538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:39.958 [2024-12-09 06:26:34.083633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7d920 with addr=10.0.0.2, port=4420 00:27:39.958 [2024-12-09 06:26:34.083666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7d920 is same with the state(6) to be set 00:27:39.958 [2024-12-09 06:26:34.083723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7d920 (9): Bad file descriptor 00:27:39.958 [2024-12-09 06:26:34.083870] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:39.958 [2024-12-09 06:26:34.083930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:39.958 [2024-12-09 06:26:34.083953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:39.958 [2024-12-09 06:26:34.083978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:39.958 [2024-12-09 06:26:34.083999] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:39.958 [2024-12-09 06:26:34.084017] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:39.958 [2024-12-09 06:26:34.084030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:39.958 [2024-12-09 06:26:34.084053] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:39.958 [2024-12-09 06:26:34.084068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:39.958 06:26:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.958 06:26:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:39.958 06:26:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.559 [2024-12-09 06:26:35.086478] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
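The connect() failures above (errno 110, then the "Bad file descriptor" flush errors on tqpair 0x1f7d920) are the intended fault: the test had already deleted the target address and downed the interface inside the target namespace, so every reconnect attempt to 10.0.0.2:4420 times out. A minimal reproduction of that fault injection, using the same commands this run traced earlier (the namespace and interface names are specific to this machine):

    # Pull the target-side path out from under the initiator, as the test did:
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Given the options the host was started with (--reconnect-delay-sec 1,
    # --ctrlr-loss-timeout-sec 2, --fast-io-fail-timeout-sec 1), bdev_nvme
    # should retry roughly once per second and give the controller up after
    # the two-second loss timeout, which is what unblocks wait_for_bdev ''.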
00:27:40.559 [2024-12-09 06:26:35.086498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:40.559 [2024-12-09 06:26:35.086507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:40.559 [2024-12-09 06:26:35.086512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:40.559 [2024-12-09 06:26:35.086518] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:40.559 [2024-12-09 06:26:35.086524] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:40.559 [2024-12-09 06:26:35.086527] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:40.559 [2024-12-09 06:26:35.086531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:40.559 [2024-12-09 06:26:35.086550] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:40.559 [2024-12-09 06:26:35.086568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.559 [2024-12-09 06:26:35.086576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.559 [2024-12-09 06:26:35.086583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.559 [2024-12-09 06:26:35.086589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.559 [2024-12-09 06:26:35.086595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.559 [2024-12-09 06:26:35.086600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.559 [2024-12-09 06:26:35.086606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.559 [2024-12-09 06:26:35.086611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.559 [2024-12-09 06:26:35.086617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.559 [2024-12-09 06:26:35.086622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.559 [2024-12-09 06:26:35.086628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
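
In plain terms, the cascade above is the expected one: with 10.0.0.2 deleted from the interface, every reconnect attempt dies inside connect() with errno 110 (Connection timed out), bdev_nvme gives up on the data controller, clears its pending resets and marks it failed, and the same teardown then reaches the discovery controller, whose fabrics property reads start failing on a dead socket. That is why the next bdev query returns an empty list.
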
00:27:40.559 [2024-12-09 06:26:35.086750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc60 (9): Bad file descriptor 00:27:40.559 [2024-12-09 06:26:35.087761] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:40.559 [2024-12-09 06:26:35.087768] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:40.559 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.559 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.559 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.559 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.560 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.560 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.560 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.560 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:40.824 06:26:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.799 06:26:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:41.799 06:26:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.794 [2024-12-09 06:26:37.147641] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:42.794 [2024-12-09 06:26:37.147656] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:42.794 [2024-12-09 06:26:37.147667] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:42.794 [2024-12-09 06:26:37.235909] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:42.794 [2024-12-09 06:26:37.294531] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:42.794 [2024-12-09 06:26:37.295209] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f881e0:1 started. 00:27:42.794 [2024-12-09 06:26:37.296138] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:42.794 [2024-12-09 06:26:37.296165] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:42.794 [2024-12-09 06:26:37.296180] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:42.794 [2024-12-09 06:26:37.296192] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:42.794 [2024-12-09 06:26:37.296201] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:42.794 [2024-12-09 06:26:37.346079] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f881e0 was disconnected and freed. delete nvme_qpair. 
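
Note that the re-attach above needs no operator action: the discovery service registered earlier in the script keeps polling 10.0.0.2:8009, and as soon as the port answers again it replays the discovery log page and rebuilds nvme1 and its nvme1n1 namespace bdev. For reference, such a service is typically set up with an RPC of the following shape (an assumed invocation; the actual call happened before this excerpt):

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -w
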
00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.794 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 458458 ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458458' 00:27:43.080 killing process with pid 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 458458 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:43.080 rmmod nvme_tcp 00:27:43.080 rmmod nvme_fabrics 00:27:43.080 rmmod nvme_keyring 00:27:43.080 06:26:37 
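
killprocess, exercised twice during this teardown (pid 458458 for the host app, then 458143 for the target), reads straight off the xtrace. A minimal reconstruction; only the steps this run actually executes are shown, and any escalation logic autotest_common.sh may carry beyond them is elided:

    # Reconstructed from the common/autotest_common.sh@954-978 trace above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1          # @954: refuse an empty pid
        kill -0 "$pid" || return 1         # @958: is the process still alive?
        if [[ $(uname) == Linux ]]; then   # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0/1 here
        fi
        # @964 checks whether the comm name is "sudo"; false in this run.
        echo "killing process with pid $pid"   # @972
        kill "$pid"                            # @973
        wait "$pid"                            # @978
    }
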
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 458143 ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 458143 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 458143 ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 458143 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.080 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458143 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458143' 00:27:43.364 killing process with pid 458143 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 458143 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 458143 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.364 06:26:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.396 00:27:45.396 real 0m23.354s 00:27:45.396 user 0m27.518s 00:27:45.396 sys 0m6.926s 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
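
Network teardown mirrors setup. Every iptables rule the harness installs carries an SPDK_NVMF comment (visible when the next test adds its rule), so cleanup is a filtered save/restore round-trip, followed by dropping the target namespace and flushing the initiator address:

    # nvmf/common.sh@791 "iptr": logic verbatim from the trace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # @302-303: _remove_spdk_ns runs with tracing disabled; deleting the
    # namespace is the assumed body, the address flush is verbatim.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
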
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.396 ************************************ 00:27:45.396 END TEST nvmf_discovery_remove_ifc 00:27:45.396 ************************************ 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.396 ************************************ 00:27:45.396 START TEST nvmf_identify_kernel_target 00:27:45.396 ************************************ 00:27:45.396 06:26:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:45.682 * Looking for test storage... 00:27:45.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.682 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:45.682 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:45.682 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:45.682 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.683 --rc genhtml_branch_coverage=1 00:27:45.683 --rc genhtml_function_coverage=1 00:27:45.683 --rc genhtml_legend=1 00:27:45.683 --rc geninfo_all_blocks=1 00:27:45.683 --rc geninfo_unexecuted_blocks=1 00:27:45.683 00:27:45.683 ' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.683 --rc genhtml_branch_coverage=1 00:27:45.683 --rc genhtml_function_coverage=1 00:27:45.683 --rc genhtml_legend=1 00:27:45.683 --rc geninfo_all_blocks=1 00:27:45.683 --rc geninfo_unexecuted_blocks=1 00:27:45.683 00:27:45.683 ' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.683 --rc genhtml_branch_coverage=1 00:27:45.683 --rc genhtml_function_coverage=1 00:27:45.683 --rc genhtml_legend=1 00:27:45.683 --rc geninfo_all_blocks=1 00:27:45.683 --rc geninfo_unexecuted_blocks=1 00:27:45.683 00:27:45.683 ' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.683 --rc genhtml_branch_coverage=1 00:27:45.683 --rc genhtml_function_coverage=1 00:27:45.683 --rc genhtml_legend=1 00:27:45.683 --rc geninfo_all_blocks=1 00:27:45.683 --rc geninfo_unexecuted_blocks=1 00:27:45.683 00:27:45.683 ' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
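
The block above is the generic dotted-version comparator from scripts/common.sh, here gating on the installed lcov ("lt 1.15 2"). Reconstructed and lightly simplified (the real helper also routes each field through the decimal() sanitizer that appears in the trace):

    # scripts/common.sh cmp_versions as traced: split both versions on ".-:"
    # and compare field by field; a missing field compares as 0.
    cmp_versions() {
        local -a ver1 ver2
        local ver1_l ver2_l op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((ver1[v] > ver2[v])) && gt=1 && break
            ((ver1[v] < ver2[v])) && lt=1 && break
        done
        case "$op" in
            "<") ((lt == 1)) ;;
            ">") ((gt == 1)) ;;
        esac
    }

    cmp_versions 1.15 "<" 2 && echo "lcov is older than 2"   # returns 0, as traced
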
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
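
The PATH walls above are paths/export.sh prepending its toolchain directories every time it is sourced, so the same /opt/golangci, /opt/protoc and /opt/go segments accumulate once per sourcing. Harmless but noisy; a membership guard of this shape (an illustrative pattern, not what the tree does) would keep the variable flat:

    case ":$PATH:" in *":$dir:"*) ;; *) PATH=$dir:$PATH ;; esac
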
-- # '[' '' -eq 1 ']' 00:27:45.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.683 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.684 06:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.998 06:26:47 
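
The "[: : integer expression expected" message a few lines back is a real, if benign, script bug: nvmf/common.sh line 33 runs a numeric test while its variable expands to an empty string, so test has nothing to compare. The variable's name is already expanded away in the xtrace; a defaulted expansion of this shape (an illustrative fix, hypothetical variable name) would silence it:

    [ "${SOME_TEST_FLAG:-0}" -eq 1 ]
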
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.998 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:53.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:53.999 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:53.999 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:53.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
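
Device discovery above boils down to sysfs globbing: the harness keys arrays of PCI addresses by vendor:device pairs, matches this machine's two Intel E810 ports (8086:0x159b), and resolves each function's netdev as sketched below, with paths and names verbatim from the trace:

    # nvmf/common.sh@410-429 as traced: map a PCI function to its net device.
    net_devs=()
    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
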
-- # net_devs+=("${pci_net_devs[@]}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:27:53.999 00:27:53.999 --- 10.0.0.2 ping statistics --- 00:27:53.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.999 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:27:53.999 00:27:53.999 --- 10.0.0.1 ping statistics --- 00:27:53.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.999 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.999 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.000 06:26:47 
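
Consolidated, the plumbing performed in steps @250 through @291 gives the single machine a two-stack topology: the target port lives in its own network namespace (the target application is later launched under ip netns exec via NVMF_TARGET_NS_CMD), the initiator port stays in the root namespace, and a tagged iptables rule opens the NVMe/TCP port. All names and addresses below are verbatim from the trace:

    # Target side: cvl_0_0 at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side: cvl_0_1 at 10.0.0.1 in the root namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # Open the NVMe/TCP port; the comment tags the rule for teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity pings in both directions, matching the statistics above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
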
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:54.000 06:26:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:56.543 Waiting for block devices as requested 00:27:56.543 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:56.543 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:56.543 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:56.543 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:56.543 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:56.803 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:56.803 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:56.803 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:57.063 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:27:57.063 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:57.063 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:57.323 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:57.323 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:57.323 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:57.584 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:57.584 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:57.584 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:58.155 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:58.155 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
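
Before building the kernel target, the script scans /sys/block for an NVMe namespace it may claim: the device must not be zoned and must not carry a partition table. A condensed reconstruction of the @678-681 selection loop and the block_in_use probe (scripts/common.sh@381-395), simplified to the checks this run actually hits:

    # Pick the first non-zoned, partition-table-free NVMe namespace.
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue   # skip zoned devices
        # block_in_use: spdk-gpt.py finds no valid GPT ("bailing" below) and
        # blkid reports no PTTYPE, so the device counts as free.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue
        nvme=/dev/$dev
        break
    done
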
00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:58.156 No valid GPT data, bailing 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:27:58.156 00:27:58.156 Discovery Log Number of Records 2, Generation counter 2 00:27:58.156 =====Discovery Log Entry 0====== 00:27:58.156 trtype: tcp 00:27:58.156 adrfam: ipv4 00:27:58.156 subtype: current discovery subsystem 00:27:58.156 treq: not specified, sq flow control disable supported 00:27:58.156 portid: 1 00:27:58.156 trsvcid: 4420 00:27:58.156 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:58.156 traddr: 10.0.0.1 00:27:58.156 eflags: none 00:27:58.156 sectype: none 00:27:58.156 =====Discovery Log Entry 1====== 00:27:58.156 trtype: tcp 00:27:58.156 adrfam: ipv4 00:27:58.156 subtype: nvme subsystem 00:27:58.156 treq: not specified, sq flow control disable 
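
With /dev/nvme0n1 selected, the target itself is a few mkdirs and attribute writes under configfs. xtrace does not print redirection targets, so the attribute file names below are an assumption based on the standard kernel nvmet layout; the echoed values are verbatim from the trace:

    # Hedged reconstruction of configure_kernel_target (nvmf/common.sh@660-705).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target file
    echo 1            > "$subsys/attr_allow_any_host"              # assumed target file
    echo /dev/nvme0n1 > "$ns/device_path"                          # assumed target file
    echo 1            > "$ns/enable"                               # assumed target file
    echo 10.0.0.1     > "$port/addr_traddr"                        # assumed target file
    echo tcp          > "$port/addr_trtype"                        # assumed target file
    echo 4420         > "$port/addr_trsvcid"                       # assumed target file
    echo ipv4         > "$port/addr_adrfam"                        # assumed target file
    ln -s "$subsys" "$port/subsystems/"    # exporting the subsystem starts service

Once the symlink lands, the nvme discover output around this point shows both the discovery subsystem and nqn.2016-06.io.spdk:testnqn served on 10.0.0.1:4420.
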
supported 00:27:58.156 portid: 1 00:27:58.156 trsvcid: 4420 00:27:58.156 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:58.156 traddr: 10.0.0.1 00:27:58.156 eflags: none 00:27:58.156 sectype: none 00:27:58.156 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:58.156 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:58.418 ===================================================== 00:27:58.418 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:58.418 ===================================================== 00:27:58.418 Controller Capabilities/Features 00:27:58.418 ================================ 00:27:58.418 Vendor ID: 0000 00:27:58.418 Subsystem Vendor ID: 0000 00:27:58.418 Serial Number: 8aee12eb50ba0f253123 00:27:58.418 Model Number: Linux 00:27:58.418 Firmware Version: 6.8.9-20 00:27:58.418 Recommended Arb Burst: 0 00:27:58.418 IEEE OUI Identifier: 00 00 00 00:27:58.418 Multi-path I/O 00:27:58.418 May have multiple subsystem ports: No 00:27:58.418 May have multiple controllers: No 00:27:58.418 Associated with SR-IOV VF: No 00:27:58.418 Max Data Transfer Size: Unlimited 00:27:58.418 Max Number of Namespaces: 0 00:27:58.418 Max Number of I/O Queues: 1024 00:27:58.418 NVMe Specification Version (VS): 1.3 00:27:58.418 NVMe Specification Version (Identify): 1.3 00:27:58.418 Maximum Queue Entries: 1024 00:27:58.418 Contiguous Queues Required: No 00:27:58.418 Arbitration Mechanisms Supported 00:27:58.418 Weighted Round Robin: Not Supported 00:27:58.418 Vendor Specific: Not Supported 00:27:58.418 Reset Timeout: 7500 ms 00:27:58.418 Doorbell Stride: 4 bytes 00:27:58.418 NVM Subsystem Reset: Not Supported 00:27:58.418 Command Sets Supported 00:27:58.418 NVM Command Set: Supported 00:27:58.418 Boot Partition: Not Supported 00:27:58.418 Memory Page Size Minimum: 4096 bytes 00:27:58.418 Memory Page Size Maximum: 4096 bytes 00:27:58.418 Persistent Memory Region: Not Supported 00:27:58.418 Optional Asynchronous Events Supported 00:27:58.418 Namespace Attribute Notices: Not Supported 00:27:58.418 Firmware Activation Notices: Not Supported 00:27:58.418 ANA Change Notices: Not Supported 00:27:58.418 PLE Aggregate Log Change Notices: Not Supported 00:27:58.418 LBA Status Info Alert Notices: Not Supported 00:27:58.418 EGE Aggregate Log Change Notices: Not Supported 00:27:58.418 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.418 Zone Descriptor Change Notices: Not Supported 00:27:58.418 Discovery Log Change Notices: Supported 00:27:58.418 Controller Attributes 00:27:58.418 128-bit Host Identifier: Not Supported 00:27:58.418 Non-Operational Permissive Mode: Not Supported 00:27:58.418 NVM Sets: Not Supported 00:27:58.418 Read Recovery Levels: Not Supported 00:27:58.418 Endurance Groups: Not Supported 00:27:58.418 Predictable Latency Mode: Not Supported 00:27:58.418 Traffic Based Keep ALive: Not Supported 00:27:58.418 Namespace Granularity: Not Supported 00:27:58.418 SQ Associations: Not Supported 00:27:58.418 UUID List: Not Supported 00:27:58.418 Multi-Domain Subsystem: Not Supported 00:27:58.418 Fixed Capacity Management: Not Supported 00:27:58.418 Variable Capacity Management: Not Supported 00:27:58.418 Delete Endurance Group: Not Supported 00:27:58.418 Delete NVM Set: Not Supported 00:27:58.418 Extended LBA Formats Supported: Not Supported 00:27:58.418 Flexible Data Placement 
Supported: Not Supported 00:27:58.418 00:27:58.418 Controller Memory Buffer Support 00:27:58.418 ================================ 00:27:58.418 Supported: No 00:27:58.418 00:27:58.418 Persistent Memory Region Support 00:27:58.418 ================================ 00:27:58.418 Supported: No 00:27:58.418 00:27:58.418 Admin Command Set Attributes 00:27:58.418 ============================ 00:27:58.418 Security Send/Receive: Not Supported 00:27:58.418 Format NVM: Not Supported 00:27:58.418 Firmware Activate/Download: Not Supported 00:27:58.418 Namespace Management: Not Supported 00:27:58.418 Device Self-Test: Not Supported 00:27:58.418 Directives: Not Supported 00:27:58.418 NVMe-MI: Not Supported 00:27:58.418 Virtualization Management: Not Supported 00:27:58.418 Doorbell Buffer Config: Not Supported 00:27:58.418 Get LBA Status Capability: Not Supported 00:27:58.418 Command & Feature Lockdown Capability: Not Supported 00:27:58.418 Abort Command Limit: 1 00:27:58.418 Async Event Request Limit: 1 00:27:58.418 Number of Firmware Slots: N/A 00:27:58.418 Firmware Slot 1 Read-Only: N/A 00:27:58.418 Firmware Activation Without Reset: N/A 00:27:58.418 Multiple Update Detection Support: N/A 00:27:58.418 Firmware Update Granularity: No Information Provided 00:27:58.418 Per-Namespace SMART Log: No 00:27:58.418 Asymmetric Namespace Access Log Page: Not Supported 00:27:58.418 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:58.418 Command Effects Log Page: Not Supported 00:27:58.419 Get Log Page Extended Data: Supported 00:27:58.419 Telemetry Log Pages: Not Supported 00:27:58.419 Persistent Event Log Pages: Not Supported 00:27:58.419 Supported Log Pages Log Page: May Support 00:27:58.419 Commands Supported & Effects Log Page: Not Supported 00:27:58.419 Feature Identifiers & Effects Log Page:May Support 00:27:58.419 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.419 Data Area 4 for Telemetry Log: Not Supported 00:27:58.419 Error Log Page Entries Supported: 1 00:27:58.419 Keep Alive: Not Supported 00:27:58.419 00:27:58.419 NVM Command Set Attributes 00:27:58.419 ========================== 00:27:58.419 Submission Queue Entry Size 00:27:58.419 Max: 1 00:27:58.419 Min: 1 00:27:58.419 Completion Queue Entry Size 00:27:58.419 Max: 1 00:27:58.419 Min: 1 00:27:58.419 Number of Namespaces: 0 00:27:58.419 Compare Command: Not Supported 00:27:58.419 Write Uncorrectable Command: Not Supported 00:27:58.419 Dataset Management Command: Not Supported 00:27:58.419 Write Zeroes Command: Not Supported 00:27:58.419 Set Features Save Field: Not Supported 00:27:58.419 Reservations: Not Supported 00:27:58.419 Timestamp: Not Supported 00:27:58.419 Copy: Not Supported 00:27:58.419 Volatile Write Cache: Not Present 00:27:58.419 Atomic Write Unit (Normal): 1 00:27:58.419 Atomic Write Unit (PFail): 1 00:27:58.419 Atomic Compare & Write Unit: 1 00:27:58.419 Fused Compare & Write: Not Supported 00:27:58.419 Scatter-Gather List 00:27:58.419 SGL Command Set: Supported 00:27:58.419 SGL Keyed: Not Supported 00:27:58.419 SGL Bit Bucket Descriptor: Not Supported 00:27:58.419 SGL Metadata Pointer: Not Supported 00:27:58.419 Oversized SGL: Not Supported 00:27:58.419 SGL Metadata Address: Not Supported 00:27:58.419 SGL Offset: Supported 00:27:58.419 Transport SGL Data Block: Not Supported 00:27:58.419 Replay Protected Memory Block: Not Supported 00:27:58.419 00:27:58.419 Firmware Slot Information 00:27:58.419 ========================= 00:27:58.419 Active slot: 0 00:27:58.419 00:27:58.419 00:27:58.419 Error Log 00:27:58.419 
========= 00:27:58.419 00:27:58.419 Active Namespaces 00:27:58.419 ================= 00:27:58.419 Discovery Log Page 00:27:58.419 ================== 00:27:58.419 Generation Counter: 2 00:27:58.419 Number of Records: 2 00:27:58.419 Record Format: 0 00:27:58.419 00:27:58.419 Discovery Log Entry 0 00:27:58.419 ---------------------- 00:27:58.419 Transport Type: 3 (TCP) 00:27:58.419 Address Family: 1 (IPv4) 00:27:58.419 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:58.419 Entry Flags: 00:27:58.419 Duplicate Returned Information: 0 00:27:58.419 Explicit Persistent Connection Support for Discovery: 0 00:27:58.419 Transport Requirements: 00:27:58.419 Secure Channel: Not Specified 00:27:58.419 Port ID: 1 (0x0001) 00:27:58.419 Controller ID: 65535 (0xffff) 00:27:58.419 Admin Max SQ Size: 32 00:27:58.419 Transport Service Identifier: 4420 00:27:58.419 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:58.419 Transport Address: 10.0.0.1 00:27:58.419 Discovery Log Entry 1 00:27:58.419 ---------------------- 00:27:58.419 Transport Type: 3 (TCP) 00:27:58.419 Address Family: 1 (IPv4) 00:27:58.419 Subsystem Type: 2 (NVM Subsystem) 00:27:58.419 Entry Flags: 00:27:58.419 Duplicate Returned Information: 0 00:27:58.419 Explicit Persistent Connection Support for Discovery: 0 00:27:58.419 Transport Requirements: 00:27:58.419 Secure Channel: Not Specified 00:27:58.419 Port ID: 1 (0x0001) 00:27:58.419 Controller ID: 65535 (0xffff) 00:27:58.419 Admin Max SQ Size: 32 00:27:58.419 Transport Service Identifier: 4420 00:27:58.419 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:58.419 Transport Address: 10.0.0.1 00:27:58.419 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:58.419 get_feature(0x01) failed 00:27:58.419 get_feature(0x02) failed 00:27:58.419 get_feature(0x04) failed 00:27:58.419 ===================================================== 00:27:58.419 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:58.419 ===================================================== 00:27:58.419 Controller Capabilities/Features 00:27:58.419 ================================ 00:27:58.419 Vendor ID: 0000 00:27:58.419 Subsystem Vendor ID: 0000 00:27:58.419 Serial Number: be6843e1af2c7c2396ae 00:27:58.419 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:58.419 Firmware Version: 6.8.9-20 00:27:58.419 Recommended Arb Burst: 6 00:27:58.419 IEEE OUI Identifier: 00 00 00 00:27:58.419 Multi-path I/O 00:27:58.419 May have multiple subsystem ports: Yes 00:27:58.419 May have multiple controllers: Yes 00:27:58.419 Associated with SR-IOV VF: No 00:27:58.419 Max Data Transfer Size: Unlimited 00:27:58.419 Max Number of Namespaces: 1024 00:27:58.419 Max Number of I/O Queues: 128 00:27:58.419 NVMe Specification Version (VS): 1.3 00:27:58.419 NVMe Specification Version (Identify): 1.3 00:27:58.419 Maximum Queue Entries: 1024 00:27:58.419 Contiguous Queues Required: No 00:27:58.419 Arbitration Mechanisms Supported 00:27:58.419 Weighted Round Robin: Not Supported 00:27:58.419 Vendor Specific: Not Supported 00:27:58.419 Reset Timeout: 7500 ms 00:27:58.419 Doorbell Stride: 4 bytes 00:27:58.419 NVM Subsystem Reset: Not Supported 00:27:58.419 Command Sets Supported 00:27:58.419 NVM Command Set: Supported 00:27:58.419 Boot Partition: Not Supported 00:27:58.419 
Memory Page Size Minimum: 4096 bytes 00:27:58.419 Memory Page Size Maximum: 4096 bytes 00:27:58.419 Persistent Memory Region: Not Supported 00:27:58.419 Optional Asynchronous Events Supported 00:27:58.419 Namespace Attribute Notices: Supported 00:27:58.419 Firmware Activation Notices: Not Supported 00:27:58.419 ANA Change Notices: Supported 00:27:58.419 PLE Aggregate Log Change Notices: Not Supported 00:27:58.419 LBA Status Info Alert Notices: Not Supported 00:27:58.419 EGE Aggregate Log Change Notices: Not Supported 00:27:58.419 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.419 Zone Descriptor Change Notices: Not Supported 00:27:58.419 Discovery Log Change Notices: Not Supported 00:27:58.419 Controller Attributes 00:27:58.419 128-bit Host Identifier: Supported 00:27:58.419 Non-Operational Permissive Mode: Not Supported 00:27:58.419 NVM Sets: Not Supported 00:27:58.419 Read Recovery Levels: Not Supported 00:27:58.419 Endurance Groups: Not Supported 00:27:58.419 Predictable Latency Mode: Not Supported 00:27:58.419 Traffic Based Keep ALive: Supported 00:27:58.419 Namespace Granularity: Not Supported 00:27:58.419 SQ Associations: Not Supported 00:27:58.419 UUID List: Not Supported 00:27:58.419 Multi-Domain Subsystem: Not Supported 00:27:58.419 Fixed Capacity Management: Not Supported 00:27:58.419 Variable Capacity Management: Not Supported 00:27:58.419 Delete Endurance Group: Not Supported 00:27:58.419 Delete NVM Set: Not Supported 00:27:58.419 Extended LBA Formats Supported: Not Supported 00:27:58.419 Flexible Data Placement Supported: Not Supported 00:27:58.419 00:27:58.419 Controller Memory Buffer Support 00:27:58.419 ================================ 00:27:58.419 Supported: No 00:27:58.419 00:27:58.419 Persistent Memory Region Support 00:27:58.419 ================================ 00:27:58.419 Supported: No 00:27:58.419 00:27:58.419 Admin Command Set Attributes 00:27:58.419 ============================ 00:27:58.419 Security Send/Receive: Not Supported 00:27:58.419 Format NVM: Not Supported 00:27:58.419 Firmware Activate/Download: Not Supported 00:27:58.419 Namespace Management: Not Supported 00:27:58.419 Device Self-Test: Not Supported 00:27:58.419 Directives: Not Supported 00:27:58.419 NVMe-MI: Not Supported 00:27:58.419 Virtualization Management: Not Supported 00:27:58.419 Doorbell Buffer Config: Not Supported 00:27:58.419 Get LBA Status Capability: Not Supported 00:27:58.419 Command & Feature Lockdown Capability: Not Supported 00:27:58.419 Abort Command Limit: 4 00:27:58.419 Async Event Request Limit: 4 00:27:58.419 Number of Firmware Slots: N/A 00:27:58.419 Firmware Slot 1 Read-Only: N/A 00:27:58.419 Firmware Activation Without Reset: N/A 00:27:58.419 Multiple Update Detection Support: N/A 00:27:58.419 Firmware Update Granularity: No Information Provided 00:27:58.419 Per-Namespace SMART Log: Yes 00:27:58.419 Asymmetric Namespace Access Log Page: Supported 00:27:58.419 ANA Transition Time : 10 sec 00:27:58.419 00:27:58.419 Asymmetric Namespace Access Capabilities 00:27:58.420 ANA Optimized State : Supported 00:27:58.420 ANA Non-Optimized State : Supported 00:27:58.420 ANA Inaccessible State : Supported 00:27:58.420 ANA Persistent Loss State : Supported 00:27:58.420 ANA Change State : Supported 00:27:58.420 ANAGRPID is not changed : No 00:27:58.420 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:58.420 00:27:58.420 ANA Group Identifier Maximum : 128 00:27:58.420 Number of ANA Group Identifiers : 128 00:27:58.420 Max Number of Allowed Namespaces : 1024 00:27:58.420 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:58.420 Command Effects Log Page: Supported 00:27:58.420 Get Log Page Extended Data: Supported 00:27:58.420 Telemetry Log Pages: Not Supported 00:27:58.420 Persistent Event Log Pages: Not Supported 00:27:58.420 Supported Log Pages Log Page: May Support 00:27:58.420 Commands Supported & Effects Log Page: Not Supported 00:27:58.420 Feature Identifiers & Effects Log Page:May Support 00:27:58.420 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.420 Data Area 4 for Telemetry Log: Not Supported 00:27:58.420 Error Log Page Entries Supported: 128 00:27:58.420 Keep Alive: Supported 00:27:58.420 Keep Alive Granularity: 1000 ms 00:27:58.420 00:27:58.420 NVM Command Set Attributes 00:27:58.420 ========================== 00:27:58.420 Submission Queue Entry Size 00:27:58.420 Max: 64 00:27:58.420 Min: 64 00:27:58.420 Completion Queue Entry Size 00:27:58.420 Max: 16 00:27:58.420 Min: 16 00:27:58.420 Number of Namespaces: 1024 00:27:58.420 Compare Command: Not Supported 00:27:58.420 Write Uncorrectable Command: Not Supported 00:27:58.420 Dataset Management Command: Supported 00:27:58.420 Write Zeroes Command: Supported 00:27:58.420 Set Features Save Field: Not Supported 00:27:58.420 Reservations: Not Supported 00:27:58.420 Timestamp: Not Supported 00:27:58.420 Copy: Not Supported 00:27:58.420 Volatile Write Cache: Present 00:27:58.420 Atomic Write Unit (Normal): 1 00:27:58.420 Atomic Write Unit (PFail): 1 00:27:58.420 Atomic Compare & Write Unit: 1 00:27:58.420 Fused Compare & Write: Not Supported 00:27:58.420 Scatter-Gather List 00:27:58.420 SGL Command Set: Supported 00:27:58.420 SGL Keyed: Not Supported 00:27:58.420 SGL Bit Bucket Descriptor: Not Supported 00:27:58.420 SGL Metadata Pointer: Not Supported 00:27:58.420 Oversized SGL: Not Supported 00:27:58.420 SGL Metadata Address: Not Supported 00:27:58.420 SGL Offset: Supported 00:27:58.420 Transport SGL Data Block: Not Supported 00:27:58.420 Replay Protected Memory Block: Not Supported 00:27:58.420 00:27:58.420 Firmware Slot Information 00:27:58.420 ========================= 00:27:58.420 Active slot: 0 00:27:58.420 00:27:58.420 Asymmetric Namespace Access 00:27:58.420 =========================== 00:27:58.420 Change Count : 0 00:27:58.420 Number of ANA Group Descriptors : 1 00:27:58.420 ANA Group Descriptor : 0 00:27:58.420 ANA Group ID : 1 00:27:58.420 Number of NSID Values : 1 00:27:58.420 Change Count : 0 00:27:58.420 ANA State : 1 00:27:58.420 Namespace Identifier : 1 00:27:58.420 00:27:58.420 Commands Supported and Effects 00:27:58.420 ============================== 00:27:58.420 Admin Commands 00:27:58.420 -------------- 00:27:58.420 Get Log Page (02h): Supported 00:27:58.420 Identify (06h): Supported 00:27:58.420 Abort (08h): Supported 00:27:58.420 Set Features (09h): Supported 00:27:58.420 Get Features (0Ah): Supported 00:27:58.420 Asynchronous Event Request (0Ch): Supported 00:27:58.420 Keep Alive (18h): Supported 00:27:58.420 I/O Commands 00:27:58.420 ------------ 00:27:58.420 Flush (00h): Supported 00:27:58.420 Write (01h): Supported LBA-Change 00:27:58.420 Read (02h): Supported 00:27:58.420 Write Zeroes (08h): Supported LBA-Change 00:27:58.420 Dataset Management (09h): Supported 00:27:58.420 00:27:58.420 Error Log 00:27:58.420 ========= 00:27:58.420 Entry: 0 00:27:58.420 Error Count: 0x3 00:27:58.420 Submission Queue Id: 0x0 00:27:58.420 Command Id: 0x5 00:27:58.420 Phase Bit: 0 00:27:58.420 Status Code: 0x2 00:27:58.420 Status Code Type: 0x0 00:27:58.420 Do Not Retry: 1 00:27:58.420 
Error Location: 0x28 00:27:58.420 LBA: 0x0 00:27:58.420 Namespace: 0x0 00:27:58.420 Vendor Log Page: 0x0 00:27:58.420 ----------- 00:27:58.420 Entry: 1 00:27:58.420 Error Count: 0x2 00:27:58.420 Submission Queue Id: 0x0 00:27:58.420 Command Id: 0x5 00:27:58.420 Phase Bit: 0 00:27:58.420 Status Code: 0x2 00:27:58.420 Status Code Type: 0x0 00:27:58.420 Do Not Retry: 1 00:27:58.420 Error Location: 0x28 00:27:58.420 LBA: 0x0 00:27:58.420 Namespace: 0x0 00:27:58.420 Vendor Log Page: 0x0 00:27:58.420 ----------- 00:27:58.420 Entry: 2 00:27:58.420 Error Count: 0x1 00:27:58.420 Submission Queue Id: 0x0 00:27:58.420 Command Id: 0x4 00:27:58.420 Phase Bit: 0 00:27:58.420 Status Code: 0x2 00:27:58.420 Status Code Type: 0x0 00:27:58.420 Do Not Retry: 1 00:27:58.420 Error Location: 0x28 00:27:58.420 LBA: 0x0 00:27:58.420 Namespace: 0x0 00:27:58.420 Vendor Log Page: 0x0 00:27:58.420 00:27:58.420 Number of Queues 00:27:58.420 ================ 00:27:58.420 Number of I/O Submission Queues: 128 00:27:58.420 Number of I/O Completion Queues: 128 00:27:58.420 00:27:58.420 ZNS Specific Controller Data 00:27:58.420 ============================ 00:27:58.420 Zone Append Size Limit: 0 00:27:58.420 00:27:58.420 00:27:58.420 Active Namespaces 00:27:58.420 ================= 00:27:58.420 get_feature(0x05) failed 00:27:58.420 Namespace ID:1 00:27:58.420 Command Set Identifier: NVM (00h) 00:27:58.420 Deallocate: Supported 00:27:58.420 Deallocated/Unwritten Error: Not Supported 00:27:58.420 Deallocated Read Value: Unknown 00:27:58.420 Deallocate in Write Zeroes: Not Supported 00:27:58.420 Deallocated Guard Field: 0xFFFF 00:27:58.420 Flush: Supported 00:27:58.420 Reservation: Not Supported 00:27:58.420 Namespace Sharing Capabilities: Multiple Controllers 00:27:58.420 Size (in LBAs): 3907029168 (1863GiB) 00:27:58.420 Capacity (in LBAs): 3907029168 (1863GiB) 00:27:58.420 Utilization (in LBAs): 3907029168 (1863GiB) 00:27:58.420 UUID: 3dc2d000-2d6f-44ea-bee5-e91b35f08597 00:27:58.420 Thin Provisioning: Not Supported 00:27:58.420 Per-NS Atomic Units: Yes 00:27:58.420 Atomic Boundary Size (Normal): 0 00:27:58.420 Atomic Boundary Size (PFail): 0 00:27:58.420 Atomic Boundary Offset: 0 00:27:58.420 NGUID/EUI64 Never Reused: No 00:27:58.420 ANA group ID: 1 00:27:58.420 Namespace Write Protected: No 00:27:58.420 Number of LBA Formats: 1 00:27:58.420 Current LBA Format: LBA Format #00 00:27:58.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:58.420 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.420 rmmod nvme_tcp 00:27:58.420 rmmod nvme_fabrics 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:58.420 06:26:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:58.420 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.421 06:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:00.964 06:26:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:04.260 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:04.260 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:04.260 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:04.260 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:04.260 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:04.260 0000:80:01.3 
(8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:28:04.260 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:28:06.168 0000:65:00.0 (8086 0a54): nvme -> vfio-pci
00:28:06.427
00:28:06.427 real 0m21.038s
00:28:06.427 user 0m5.242s
00:28:06.427 sys 0m10.938s
00:28:06.427 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:06.427 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:28:06.427 ************************************
00:28:06.427 END TEST nvmf_identify_kernel_target
00:28:06.427 ************************************
00:28:06.687 06:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:06.687 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:06.687 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:06.687 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.687 ************************************
00:28:06.687 START TEST nvmf_auth_host
00:28:06.687 ************************************
00:28:06.687 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:06.687 * Looking for test storage...
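With nvmf_identify_kernel_target finished, it is worth condensing what that run actually drove: the kernel nvmet target is assembled and torn down purely through configfs. The xtrace above only shows the bare echo and mkdir commands, so in the sketch below the echo redirect targets are filled in from the kernel's nvmet configfs layout and should be read as assumptions about nvmf/common.sh, not a verbatim copy of it:

nqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme0n1    # whole disk, since spdk-gpt.py reported "No valid GPT data" above
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

# Setup, mirroring nvmf/common.sh@686-705 in the trace:
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn" > "$subsys/attr_model"         # shows up as Model Number in the identify output
echo 1 > "$subsys/attr_allow_any_host"          # assumed target of the first bare 'echo 1'
echo "$dev" > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"             # from here on 'nvme discover' returns both records

# Teardown, mirroring clean_kernel_target (nvmf/common.sh@712-723):
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet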
00:28:06.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.687 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.687 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.687 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.948 --rc genhtml_branch_coverage=1 00:28:06.948 --rc genhtml_function_coverage=1 00:28:06.948 --rc genhtml_legend=1 00:28:06.948 --rc geninfo_all_blocks=1 00:28:06.948 --rc geninfo_unexecuted_blocks=1 00:28:06.948 00:28:06.948 ' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.948 --rc genhtml_branch_coverage=1 00:28:06.948 --rc genhtml_function_coverage=1 00:28:06.948 --rc genhtml_legend=1 00:28:06.948 --rc geninfo_all_blocks=1 00:28:06.948 --rc geninfo_unexecuted_blocks=1 00:28:06.948 00:28:06.948 ' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.948 --rc genhtml_branch_coverage=1 00:28:06.948 --rc genhtml_function_coverage=1 00:28:06.948 --rc genhtml_legend=1 00:28:06.948 --rc geninfo_all_blocks=1 00:28:06.948 --rc geninfo_unexecuted_blocks=1 00:28:06.948 00:28:06.948 ' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.948 --rc genhtml_branch_coverage=1 00:28:06.948 --rc genhtml_function_coverage=1 00:28:06.948 --rc genhtml_legend=1 00:28:06.948 --rc geninfo_all_blocks=1 00:28:06.948 --rc geninfo_unexecuted_blocks=1 00:28:06.948 00:28:06.948 ' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.948 06:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.948 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.949 06:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.092 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.093 06:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:15.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:15.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.093 
06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:15.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:15.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.093 06:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:15.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:15.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms
00:28:15.093
00:28:15.093 --- 10.0.0.2 ping statistics ---
00:28:15.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:15.093 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms
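Stripped of the xtrace prefixes, the topology that just passed its first ping check is: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and an iptables rule admits the NVMe/TCP port. A minimal sketch using this run's interface names (on a machine without a two-port NIC, a veth pair would stand in, which is an adaptation rather than what common.sh does here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns (the check above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns (the check that follows)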
00:28:15.093 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:15.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:15.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms
00:28:15.093
00:28:15.093 --- 10.0.0.1 ping statistics ---
00:28:15.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:15.094 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=471985
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 471985
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 471985 ']'
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
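The launch above composes in two steps: nvmfappstart prepends NVMF_TARGET_NS_CMD to NVMF_APP, so the SPDK target runs inside the namespace while the test harness stays outside, and waitforlisten then polls for the RPC socket. The body of waitforlisten lives in autotest_common.sh and is not part of this excerpt, so the poll loop below is a sketch of the idea (bounded by the max_retries=100 visible in the trace), not a copy of the helper:

# assumed simplification of nvmfappstart + waitforlisten
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth)
"${NVMF_APP[@]}" &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break    # -S: the RPC listener is a UNIX domain socket
    sleep 0.1
done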
00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.094 06:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6412e0563f76e91e22a779ce923a1e9 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8HO 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6412e0563f76e91e22a779ce923a1e9 0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6412e0563f76e91e22a779ce923a1e9 0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6412e0563f76e91e22a779ce923a1e9 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8HO 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8HO 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8HO 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.094 06:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e9c18d0a1d0e32b102b5aaf63a5ec9ae32f20dd331a36b1911d4326cb5c15a1c 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CgH 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e9c18d0a1d0e32b102b5aaf63a5ec9ae32f20dd331a36b1911d4326cb5c15a1c 3 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e9c18d0a1d0e32b102b5aaf63a5ec9ae32f20dd331a36b1911d4326cb5c15a1c 3 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e9c18d0a1d0e32b102b5aaf63a5ec9ae32f20dd331a36b1911d4326cb5c15a1c 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CgH 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CgH 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.CgH 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d189060e1feb5c073f01e3c8f921c2dc59e827ce5a044e20 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EVF 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d189060e1feb5c073f01e3c8f921c2dc59e827ce5a044e20 0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d189060e1feb5c073f01e3c8f921c2dc59e827ce5a044e20 0 
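gen_dhchap_key, traced here once per keys[]/ckeys[] slot, reduces to: pull len/2 random bytes as hex via xxd, map the digest name to an id through the null/sha256/sha384/sha512 table, and have a small Python snippet wrap the bytes into a DH-HMAC-CHAP secret before the file is locked down to mode 0600. The script fed to 'python -' is not visible in the xtrace; the one-liner below reconstructs the documented DHHC-1 layout (base64 over the key bytes plus a trailing CRC32) and is an assumption about what the script pipes in:

# sketch of gen_dhchap_key as seen in the trace; the python body is assumed
gen_dhchap_key() {
    local digest=$1 len=$2    # e.g. "null 32", "sha512 64"
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c 'import base64, binascii, sys
k = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(k).to_bytes(4, "little")   # assumed CRC byte order
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}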
00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d189060e1feb5c073f01e3c8f921c2dc59e827ce5a044e20 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:15.094 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EVF 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EVF 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EVF 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f5b654953afa1e623e59329cdcb257aa2e708b027d288824 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IZX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f5b654953afa1e623e59329cdcb257aa2e708b027d288824 2 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f5b654953afa1e623e59329cdcb257aa2e708b027d288824 2 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f5b654953afa1e623e59329cdcb257aa2e708b027d288824 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IZX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IZX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IZX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.355 06:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ba4b00d5363ffa4322ebf9a81f90714 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CKr 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ba4b00d5363ffa4322ebf9a81f90714 1 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ba4b00d5363ffa4322ebf9a81f90714 1 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ba4b00d5363ffa4322ebf9a81f90714 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CKr 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CKr 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CKr 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:15.355 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d975947052a7b959ccf8653fecee5e19 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qp4 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d975947052a7b959ccf8653fecee5e19 1 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d975947052a7b959ccf8653fecee5e19 1 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d975947052a7b959ccf8653fecee5e19 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qp4 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qp4 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Qp4 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:15.356 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d4e8018483617198722c1a520ff86e329b3061e410cc979 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.JiS 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d4e8018483617198722c1a520ff86e329b3061e410cc979 2 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d4e8018483617198722c1a520ff86e329b3061e410cc979 2 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d4e8018483617198722c1a520ff86e329b3061e410cc979 00:28:15.616 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:15.617 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.617 06:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.JiS 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.JiS 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JiS 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:15.617 06:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=12c2d855d3506d60fb1fc6f92e49f97a 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Be8 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 12c2d855d3506d60fb1fc6f92e49f97a 0 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 12c2d855d3506d60fb1fc6f92e49f97a 0 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=12c2d855d3506d60fb1fc6f92e49f97a 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Be8 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Be8 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Be8 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a930de2f61176cfaf3d54850892132817a4f8e9cb90701af2a2c24bdc0ec630 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.e2T 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a930de2f61176cfaf3d54850892132817a4f8e9cb90701af2a2c24bdc0ec630 3 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a930de2f61176cfaf3d54850892132817a4f8e9cb90701af2a2c24bdc0ec630 3 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a930de2f61176cfaf3d54850892132817a4f8e9cb90701af2a2c24bdc0ec630 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
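
[editor's note] The "python -" step the trace resumes with just below is where each gen_dhchap_key call above turns its raw xxd output into a DHHC-1 secret. A minimal, self-contained sketch of that flow, assuming the DHHC-1 layout from the NVMe in-band authentication spec (base64 of the ASCII hex key followed by its CRC-32 in little-endian, behind a two-digit digest id) rather than quoting the script's exact source:

    # Sketch: generate one key file like the /tmp/spdk.key-*.XXX files above.
    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, as in "gen_dhchap_key null 32"
    digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
    secret=$(python3 - "$key" "$digest" <<'EOF'
    import base64, binascii, struct, sys
    key = sys.argv[1].encode()
    crc = struct.pack("<I", binascii.crc32(key))   # trailing CRC-32, little-endian (assumption)
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF
    )
    file=$(mktemp -t spdk.key-null.XXX)
    printf '%s\n' "$secret" > "$file"
    chmod 0600 "$file"                      # matches the chmod 0600 steps in the trace

Decoding one of the secrets that appear later in the trace (for example DHHC-1:00:ZDE4OTA2... gives back "d189060e..." plus four extra bytes) is what motivates the base64-of-hex-plus-CRC assumption.
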
nvmf/common.sh@733 -- # python - 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.e2T 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.e2T 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.e2T 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 471985 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 471985 ']' 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.617 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8HO 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.CgH ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CgH 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EVF 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IZX ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.IZX 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CKr 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Qp4 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qp4 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JiS 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Be8 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Be8 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.e2T 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.878 06:27:10 
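
[editor's note] At this point all five key slots and their controller keys are registered with the target's keyring, and get_main_ns_ip (whose body continues below) resolves the address the kernel initiator will dial. For reference, the registration that just completed is equivalent to issuing the same RPCs with the standalone client:

    # The keyring_file_add_key RPCs from the trace, one per generated file.
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.8HO
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CgH
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.EVF
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IZX
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.CKr
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qp4
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.JiS
    scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.Be8
    scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.e2T
    # ckeys[4] is deliberately empty, so key4 gets no controller key.
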
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:15.878 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:16.139 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:16.139 06:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:19.436 Waiting for block devices as requested 00:28:19.436 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:19.436 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:19.436 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:19.698 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:19.698 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:19.698 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:19.698 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:19.959 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:19.959 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:28:20.219 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:20.219 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:20.219 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:20.480 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:20.480 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:20.480 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:20.480 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:20.739 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:21.679 No valid GPT data, bailing 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:21.679 06:27:15 
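
[editor's note] Everything from the modprobe above through the echoes just below is standard kernel nvmet configfs plumbing: create the subsystem, back namespace 1 with the probed /dev/nvme0n1, and expose port 1 on 10.0.0.1:4420 over TCP. Collected into one sketch; the xtrace hides redirection targets, so the attribute paths here are the usual nvmet configfs names, an assumption rather than a quote from the script:

    # Kernel NVMe-oF target setup, as performed above/below via configfs.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
    echo tcp           > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call a little further on then confirms that both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on that port.
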
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:21.679 06:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:28:21.679 00:28:21.679 Discovery Log Number of Records 2, Generation counter 2 00:28:21.679 =====Discovery Log Entry 0====== 00:28:21.679 trtype: tcp 00:28:21.679 adrfam: ipv4 00:28:21.679 subtype: current discovery subsystem 00:28:21.679 treq: not specified, sq flow control disable supported 00:28:21.679 portid: 1 00:28:21.679 trsvcid: 4420 00:28:21.679 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:21.679 traddr: 10.0.0.1 00:28:21.679 eflags: none 00:28:21.679 sectype: none 00:28:21.679 =====Discovery Log Entry 1====== 00:28:21.679 trtype: tcp 00:28:21.679 adrfam: ipv4 00:28:21.679 subtype: nvme subsystem 00:28:21.679 treq: not specified, sq flow control disable supported 00:28:21.679 portid: 1 00:28:21.679 trsvcid: 4420 00:28:21.679 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:21.679 traddr: 10.0.0.1 00:28:21.679 eflags: none 00:28:21.679 sectype: none 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.679 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.680 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.940 nvme0n1 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
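
[editor's note] connect_authenticate, whose body follows, repeats for keyid 0 exactly the pattern just exercised for key1: tell the SPDK bdev layer which digests and DH groups it may negotiate, attach with a named key (and controller key, when one exists), check that the controller came up, and tear it down. One pass, condensed to the bare RPCs as they appear in the trace:

    # One DH-HMAC-CHAP round-trip against the kernel target, as driven above.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
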
00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.940 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.200 nvme0n1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.200 06:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.200 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.459 nvme0n1 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.459 06:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 nvme0n1 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 nvme0n1 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.718 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.978 nvme0n1 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.978 06:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.978 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.238 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.498 06:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.498 nvme0n1 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:23.498 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.758 
06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.758 nvme0n1 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.758 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.017 06:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.017 nvme0n1 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.017 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.018 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.018 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.018 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.018 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.277 06:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.277 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.278 nvme0n1 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.278 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.537 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.538 06:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.538 06:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.538 nvme0n1 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.538 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.798 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 nvme0n1 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:25.626 06:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.626 06:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.626 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.886 nvme0n1 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.886 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.147 nvme0n1 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.147 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.148 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.408 nvme0n1 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.408 06:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.408 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.668 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.668 06:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.668 nvme0n1 00:28:26.668 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.668 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.668 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.668 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.668 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.928 06:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.310 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.569 06:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.829 nvme0n1 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 
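Each iteration of the trace here follows one fixed pattern per (digest, dhgroup, keyid) combination: host/auth.sh first programs the key on the kernel nvmet target (nvmet_auth_set_key, host/auth.sh@42-51), restricts the SPDK initiator to the single digest/DH group under test via bdev_nvme_set_options, attaches with bdev_nvme_attach_controller passing --dhchap-key keyN (and --dhchap-ctrlr-key ckeyN when a controller key exists), confirms authentication succeeded by matching bdev_nvme_get_controllers output against nvme0, and detaches before the next combination. A minimal sketch of that loop body follows, in the script's own shell idiom: the rpc_cmd invocations are lifted from the trace, while the configfs paths behind the echo steps at host/auth.sh@48-51 are an assumption based on kernel nvmet conventions, and the keys[]/ckeys[] arrays are assumed to hold the DHHC-1 secrets registered earlier in the run.

# Hedged sketch of one iteration of the digest/dhgroup/keyid matrix traced above.
# ASSUMPTION: $nvmet_host points at the target-side host entry, e.g.
# /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}

	echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. hmac(sha256)
	echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe4096
	echo "$key" > "$nvmet_host/dhchap_key"             # host secret, DHHC-1 format
	# A controller key is set only for bidirectional authentication.
	[[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
}

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Built as at host/auth.sh@58: expands to nothing when no controller key exists.
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

	# Allow only the combination under test on the initiator side.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" "${ckey[@]}"
	# The controller only appears if DH-HMAC-CHAP completed successfully.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

Note how the key matrix is laid out: keyid 4 carries no controller key (ckey= is empty in the trace), so every dhgroup pass ends with a unidirectional attach, while keyids 0-3 exercise bidirectional authentication with secrets spanning the DHHC-1 transform variants (the :00:-:03: field after the DHHC-1 prefix encodes the hash applied to the base64 secret: none, SHA-256, SHA-384, or SHA-512).
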
00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.829 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.830 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.400 nvme0n1 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.400 06:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.400 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.401 06:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 nvme0n1 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.973 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 nvme0n1 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.234 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.495 06:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.755 nvme0n1 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.755 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.695 nvme0n1 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.695 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.696 06:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.696 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.266 nvme0n1 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:32.266 
06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.266 06:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.835 nvme0n1 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.835 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.836 
06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.836 06:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.774 nvme0n1 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.774 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.775 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.344 nvme0n1 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.344 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.604 nvme0n1 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.604 06:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.604 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.605 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.605 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.605 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.605 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.605 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.865 nvme0n1 00:28:34.865 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.865 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.865 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.865 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.865 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:34.866 06:27:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.866 nvme0n1 00:28:34.866 06:27:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.866 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.127 nvme0n1 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.127 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.387 nvme0n1 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.387 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.388 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.648 06:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 nvme0n1 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.648 
06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.648 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.909 06:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 nvme0n1 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.909 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.170 nvme0n1 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.170 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.431 nvme0n1 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.431 
06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.431 06:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.431 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.432 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.692 nvme0n1 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.692 
06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.692 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.953 nvme0n1 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.953 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.213 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.214 06:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.214 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.474 nvme0n1 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:37.474 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.475 06:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.735 nvme0n1 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.735 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.736 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.998 nvme0n1 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.998 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.260 06:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.260 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.261 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.261 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 nvme0n1 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.522 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.523 06:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.096 nvme0n1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.096 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.358 nvme0n1 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.358 06:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.358 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.359 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.359 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.359 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.619 06:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.619 06:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.879 nvme0n1 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.879 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.880 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.880 
06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.453 nvme0n1 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.453 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.454 06:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.713 nvme0n1 00:28:40.713 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.713 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.713 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.713 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.713 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.973 06:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.973 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.974 06:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.544 nvme0n1 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.544 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.485 nvme0n1
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:42.485 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]]
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.486 06:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.055 nvme0n1
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.055 06:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.624 nvme0n1
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.624 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.884 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.455 nvme0n1
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
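[Editor's note] At this point the sha384/ffdhe8192 pass is complete and the trace moves on to sha512 with ffdhe2048. The loop driving every iteration can be read directly off the host/auth.sh line numbers in the trace (@100-@104); a minimal bash sketch of that structure, with the array contents assumed rather than copied from the SPDK source:

    for digest in "${digests[@]}"; do        # e.g. sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do  # e.g. ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do   # keyids 0-4 in this log
                # program the target side with key/digest/dhgroup (@103)
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # attach, verify and detach an authenticated controller (@104)
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done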
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE:
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=:
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE:
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=:
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.455 06:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.716 nvme0n1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==:
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==:
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==:
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==:
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.716 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.977 nvme0n1
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.977 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:44.977 nvme0n1
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 nvme0n1
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.238 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.498 06:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.498 nvme0n1
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
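[Editor's note] The sha512/ffdhe2048 pass ends here and the ffdhe3072 pass begins. Every connect_authenticate call above follows the same sequence, which can be reconstructed from trace lines @55-@65; the following is a simplified bash sketch of that flow under those assumptions, not the SPDK source verbatim:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # pass a controller key only when one is defined for this keyid (@58)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # restrict the initiator to the digest/dhgroup under test (@60)
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # connect over TCP, authenticating with the named keys (@61)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the nvme0 controller only exists if DH-HMAC-CHAP succeeded (@64)
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # clean up for the next iteration (@65)
        rpc_cmd bdev_nvme_detach_controller nvme0
    }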
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE:
00:28:45.498 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=:
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE:
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]]
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=:
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.499 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.759 nvme0n1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==:
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==:
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==:
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]]
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==:
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:45.759 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.019 nvme0n1
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:46.019 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.281 nvme0n1
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]]
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS:
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:46.281 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.543 06:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.543 nvme0n1
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=:
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.543 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.803 nvme0n1 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.803 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.804 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.063 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.064 06:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.064 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.324 nvme0n1 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.324 06:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.324 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.325 06:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.325 06:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.585 nvme0n1 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.585 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.846 nvme0n1 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.846 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.106 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.107 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.107 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.367 nvme0n1 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.367 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.368 06:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 nvme0n1 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.628 06:27:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.628 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.198 nvme0n1 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.198 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.199 06:27:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.199 06:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.460 nvme0n1 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.460 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:49.721 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.722 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.982 nvme0n1 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.982 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.243 06:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.504 nvme0n1 00:28:50.504 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.504 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.504 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.505 06:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.505 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.765 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.025 nvme0n1 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY0MTJlMDU2M2Y3NmU5MWUyMmE3NzljZTkyM2ExZTkRo2FE: 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTljMThkMGExZDBlMzJiMTAyYjVhYWY2M2E1ZWM5YWUzMmYyMGRkMzMxYTM2YjE5MTFkNDMyNmNiNWMxNWExY+CLpQI=: 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.025 06:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.595 nvme0n1 00:28:51.595 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.854 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.423 nvme0n1 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.423 06:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.423 06:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.423 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.424 06:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.362 nvme0n1 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q0ZTgwMTg0ODM2MTcxOTg3MjJjMWE1MjBmZjg2ZTMyOWIzMDYxZTQxMGNjOTc54HEH5A==: 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTJjMmQ4NTVkMzUwNmQ2MGZiMWZjNmY5MmU0OWY5N2Hn6ZOS: 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.362 06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.362 
06:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 nvme0n1 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGE5MzBkZTJmNjExNzZjZmFmM2Q1NDg1MDg5MjEzMjgxN2E0ZjhlOWNiOTA3MDFhZjJhMmMyNGJkYzBlYzYzMFeA2jk=: 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.930 06:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.498 nvme0n1 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.498 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.759 request: 00:28:54.759 { 00:28:54.759 "name": "nvme0", 00:28:54.759 "trtype": "tcp", 00:28:54.759 "traddr": "10.0.0.1", 00:28:54.759 "adrfam": "ipv4", 00:28:54.759 "trsvcid": "4420", 00:28:54.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:54.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:54.759 "prchk_reftag": false, 00:28:54.759 "prchk_guard": false, 00:28:54.759 "hdgst": false, 00:28:54.759 "ddgst": false, 00:28:54.759 "allow_unrecognized_csi": false, 00:28:54.759 "method": "bdev_nvme_attach_controller", 00:28:54.759 "req_id": 1 00:28:54.759 } 00:28:54.759 Got JSON-RPC error response 00:28:54.759 response: 00:28:54.759 { 00:28:54.759 "code": -5, 00:28:54.759 "message": "Input/output error" 00:28:54.759 } 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.759 request: 00:28:54.759 { 00:28:54.759 "name": "nvme0", 00:28:54.759 "trtype": "tcp", 00:28:54.759 "traddr": "10.0.0.1", 00:28:54.759 "adrfam": "ipv4", 00:28:54.759 "trsvcid": "4420", 00:28:54.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:54.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:54.759 "prchk_reftag": false, 00:28:54.759 "prchk_guard": false, 00:28:54.759 "hdgst": false, 00:28:54.759 "ddgst": false, 00:28:54.759 "dhchap_key": "key2", 00:28:54.759 "allow_unrecognized_csi": false, 00:28:54.759 "method": "bdev_nvme_attach_controller", 00:28:54.759 "req_id": 1 00:28:54.759 } 00:28:54.759 Got JSON-RPC error response 00:28:54.759 response: 00:28:54.759 { 00:28:54.759 "code": -5, 00:28:54.759 "message": "Input/output error" 00:28:54.759 } 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:54.759 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.760 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
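[Annotation] The two failed attach attempts traced above are the expected outcome: host/auth.sh deliberately connects without any DH-CHAP key and then with the wrong key (key2), the target rejects both with JSON-RPC error -5 ("Input/output error"), and the (( 0 == 0 )) check against bdev_nvme_get_controllers | jq length confirms no controller survived either attempt. The repeated ip_candidates/NVMF_INITIATOR_IP lines are the get_main_ns_ip helper resolving 10.0.0.1 as the initiator address. A minimal sketch of the failure-assertion idiom driving this, reconstructed from the @640/@644/@655 lines in the trace (the real NOT in autotest_common.sh also routes through valid_exec_arg; this body is a simplified reconstruction, not the verbatim source):

    # Simplified sketch of the NOT wrapper seen in the trace: it succeeds
    # only when the wrapped command fails, so an authentication rejection
    # is a passing check.
    NOT() {
        local es=0
        "$@" || es=$?          # run the command, capture its exit status
        ((es > 128)) && es=1   # assumption: fold signal exits into a plain failure
        ((!es == 0))           # return 0 iff the wrapped command failed
    }

    # Usage mirroring the trace: a connect with the wrong DH-CHAP key must be
    # rejected, and no controller may be left behind. rpc_cmd is the suite's
    # JSON-RPC helper, assumed available here.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    (($(rpc_cmd bdev_nvme_get_controllers | jq length) == 0))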
00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.020 request: 00:28:55.020 { 00:28:55.020 "name": "nvme0", 00:28:55.020 "trtype": "tcp", 00:28:55.020 "traddr": "10.0.0.1", 00:28:55.020 "adrfam": "ipv4", 00:28:55.020 "trsvcid": "4420", 00:28:55.020 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:55.020 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:55.020 "prchk_reftag": false, 00:28:55.020 "prchk_guard": false, 00:28:55.020 "hdgst": false, 00:28:55.020 "ddgst": false, 00:28:55.020 "dhchap_key": "key1", 00:28:55.020 "dhchap_ctrlr_key": "ckey2", 00:28:55.020 "allow_unrecognized_csi": false, 00:28:55.020 "method": "bdev_nvme_attach_controller", 00:28:55.020 "req_id": 1 00:28:55.020 } 00:28:55.020 Got JSON-RPC error response 00:28:55.020 response: 00:28:55.020 { 00:28:55.020 "code": -5, 00:28:55.020 "message": "Input/output 
error" 00:28:55.020 } 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.020 nvme0n1 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.020 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.280 request: 00:28:55.280 { 00:28:55.280 "name": "nvme0", 00:28:55.280 "dhchap_key": "key1", 00:28:55.280 "dhchap_ctrlr_key": "ckey2", 00:28:55.280 "method": "bdev_nvme_set_keys", 00:28:55.280 "req_id": 1 00:28:55.280 } 00:28:55.280 Got JSON-RPC error response 00:28:55.280 response: 00:28:55.280 { 00:28:55.280 "code": -13, 00:28:55.280 "message": "Permission denied" 00:28:55.280 } 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:55.280 06:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:56.661 06:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4OTA2MGUxZmViNWMwNzNmMDFlM2M4ZjkyMWMyZGM1OWU4MjdjZTVhMDQ0ZTIwoI/cyg==: 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: ]] 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjViNjU0OTUzYWZhMWU2MjNlNTkzMjljZGNiMjU3YWEyZTcwOGIwMjdkMjg4ODI0dQKgXw==: 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.602 06:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.602 nvme0n1 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGJhNGIwMGQ1MzYzZmZhNDMyMmViZjlhODFmOTA3MTTnbOjI: 00:28:57.602 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: ]] 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDk3NTk0NzA1MmE3Yjk1OWNjZjg2NTNmZWNlZTVlMTlfK/y5: 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.603 request: 00:28:57.603 { 00:28:57.603 "name": "nvme0", 00:28:57.603 "dhchap_key": "key2", 00:28:57.603 "dhchap_ctrlr_key": "ckey1", 00:28:57.603 "method": "bdev_nvme_set_keys", 00:28:57.603 "req_id": 1 00:28:57.603 } 00:28:57.603 Got JSON-RPC error response 00:28:57.603 response: 00:28:57.603 { 00:28:57.603 "code": -13, 00:28:57.603 "message": "Permission denied" 00:28:57.603 } 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:57.603 06:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:58.986 06:27:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.986 rmmod nvme_tcp 00:28:58.986 rmmod nvme_fabrics 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 471985 ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 471985 ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 471985' 00:28:58.986 killing process with pid 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 471985 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:58.986 06:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:01.527 06:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:04.821 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:04.821 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:06.734 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:29:06.993 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8HO /tmp/spdk.key-null.EVF /tmp/spdk.key-sha256.CKr /tmp/spdk.key-sha384.JiS /tmp/spdk.key-sha512.e2T /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:06.993 06:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:10.290 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
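The clean_kernel_target sequence above tears the kernel nvmet target down strictly bottom-up before unloading the modules. A minimal standalone sketch of the same order, assuming the configfs paths used in this run (subsystem nqn.2024-02.io.spdk:cnode0, port 1); the enable attribute is an assumption about where the 'echo 0' above lands:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  echo 0 > "$subsys/namespaces/1/enable"      # disable the namespace (assumed target of the 'echo 0' above)
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0   # drop the port->subsystem link
  rmdir "$subsys/namespaces/1"                # remove the namespace dir first
  rmdir /sys/kernel/config/nvmet/ports/1      # then the port
  rmdir "$subsys"                             # then the subsystem itself
  modprobe -r nvmet_tcp nvmet                 # modules only unload once configfs is empty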
00:29:10.290 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:10.290 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:10.550 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:10.550 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:10.810 00:29:10.810 real 1m4.177s 00:29:10.810 user 0m56.213s 00:29:10.810 sys 0m15.879s 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.810 ************************************ 00:29:10.810 END TEST nvmf_auth_host 00:29:10.810 ************************************ 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.810 ************************************ 00:29:10.810 START TEST nvmf_digest 00:29:10.810 ************************************ 00:29:10.810 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:11.073 * Looking for test storage... 
00:29:11.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:11.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.073 --rc genhtml_branch_coverage=1 00:29:11.073 --rc genhtml_function_coverage=1 00:29:11.073 --rc genhtml_legend=1 00:29:11.073 --rc geninfo_all_blocks=1 00:29:11.073 --rc geninfo_unexecuted_blocks=1 00:29:11.073 00:29:11.073 ' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:11.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.073 --rc genhtml_branch_coverage=1 00:29:11.073 --rc genhtml_function_coverage=1 00:29:11.073 --rc genhtml_legend=1 00:29:11.073 --rc geninfo_all_blocks=1 00:29:11.073 --rc geninfo_unexecuted_blocks=1 00:29:11.073 00:29:11.073 ' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:11.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.073 --rc genhtml_branch_coverage=1 00:29:11.073 --rc genhtml_function_coverage=1 00:29:11.073 --rc genhtml_legend=1 00:29:11.073 --rc geninfo_all_blocks=1 00:29:11.073 --rc geninfo_unexecuted_blocks=1 00:29:11.073 00:29:11.073 ' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:11.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.073 --rc genhtml_branch_coverage=1 00:29:11.073 --rc genhtml_function_coverage=1 00:29:11.073 --rc genhtml_legend=1 00:29:11.073 --rc geninfo_all_blocks=1 00:29:11.073 --rc geninfo_unexecuted_blocks=1 00:29:11.073 00:29:11.073 ' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.073 
06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.073 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.074 06:28:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.074 06:28:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.212 
06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:19.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:19.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.212 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:19.213 Found net devices under 0000:4b:00.0: cvl_0_0 
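The discovery loop above resolves each matching PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/*. A one-line equivalent for the first port reported here:

  ls /sys/bus/pci/devices/0000:4b:00.0/net/   # prints cvl_0_0, as found above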
00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:19.213 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.213 06:28:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:29:19.213 00:29:19.213 --- 10.0.0.2 ping statistics --- 00:29:19.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.213 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:29:19.213 00:29:19.213 --- 10.0.0.1 ping statistics --- 00:29:19.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.213 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.213 ************************************ 00:29:19.213 START TEST nvmf_digest_clean 00:29:19.213 ************************************ 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=488402 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 488402 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 488402 ']' 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.213 06:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.213 [2024-12-09 06:28:13.188082] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:29:19.213 [2024-12-09 06:28:13.188142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.213 [2024-12-09 06:28:13.283933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.213 [2024-12-09 06:28:13.334140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.213 [2024-12-09 06:28:13.334193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.213 [2024-12-09 06:28:13.334201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.213 [2024-12-09 06:28:13.334207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.213 [2024-12-09 06:28:13.334213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
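Per the NOTICE above, a tracepoint snapshot of this target (shm id 0) can be taken while it runs, or the raw trace buffer can be copied for later inspection; both commands are the ones the log itself names:

  spdk_trace -s nvmf -i 0                    # live snapshot of the nvmf app, as suggested above
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0    # keep the buffer for offline analysis/debug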
00:29:19.213 [2024-12-09 06:28:13.334963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.474 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.735 null0 00:29:19.735 [2024-12-09 06:28:14.145497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.735 [2024-12-09 06:28:14.169758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=488543 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 488543 /var/tmp/bperf.sock 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 488543 ']' 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:19.735 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.736 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.736 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.736 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.736 06:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.736 [2024-12-09 06:28:14.228870] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:29:19.736 [2024-12-09 06:28:14.228929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488543 ] 00:29:19.736 [2024-12-09 06:28:14.300531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.996 [2024-12-09 06:28:14.350975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.567 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.567 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:20.567 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:20.567 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:20.567 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:20.827 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.827 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.087 nvme0n1 00:29:21.346 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.346 06:28:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.346 Running I/O for 2 seconds... 
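Condensed from the trace above, run_bperf drives bdevperf in four steps: start it paused on a private RPC socket, initialize the framework, attach the target with data digest enabled, then kick the workload. A sketch of the same calls, run from the SPDK repo root with the flags of this run:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests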
00:29:23.221 21157.00 IOPS, 82.64 MiB/s [2024-12-09T05:28:17.808Z] 22373.00 IOPS, 87.39 MiB/s 00:29:23.221 Latency(us) 00:29:23.221 [2024-12-09T05:28:17.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.221 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:23.221 nvme0n1 : 2.00 22407.41 87.53 0.00 0.00 5706.87 1966.08 14115.45 00:29:23.221 [2024-12-09T05:28:17.808Z] =================================================================================================================== 00:29:23.221 [2024-12-09T05:28:17.808Z] Total : 22407.41 87.53 0.00 0.00 5706.87 1966.08 14115.45 00:29:23.221 { 00:29:23.221 "results": [ 00:29:23.221 { 00:29:23.221 "job": "nvme0n1", 00:29:23.221 "core_mask": "0x2", 00:29:23.221 "workload": "randread", 00:29:23.221 "status": "finished", 00:29:23.221 "queue_depth": 128, 00:29:23.221 "io_size": 4096, 00:29:23.221 "runtime": 2.003578, 00:29:23.221 "iops": 22407.413137896303, 00:29:23.221 "mibps": 87.52895756990743, 00:29:23.221 "io_failed": 0, 00:29:23.221 "io_timeout": 0, 00:29:23.221 "avg_latency_us": 5706.869737147361, 00:29:23.221 "min_latency_us": 1966.08, 00:29:23.221 "max_latency_us": 14115.446153846155 00:29:23.221 } 00:29:23.221 ], 00:29:23.221 "core_count": 1 00:29:23.221 } 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:23.481 | select(.opcode=="crc32c") 00:29:23.481 | "\(.module_name) \(.executed)"' 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 488543 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 488543 ']' 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 488543 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:23.481 06:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488543 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488543' 00:29:23.481 killing process with pid 488543 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 488543 00:29:23.481 Received shutdown signal, test time was about 2.000000 seconds 00:29:23.481 00:29:23.481 Latency(us) 00:29:23.481 [2024-12-09T05:28:18.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.481 [2024-12-09T05:28:18.068Z] =================================================================================================================== 00:29:23.481 [2024-12-09T05:28:18.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.481 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 488543 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=489169 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 489169 /var/tmp/bperf.sock 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 489169 ']' 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:23.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.740 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.740 [2024-12-09 06:28:18.212421] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
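As a consistency check on the 4 KiB table above: bdevperf reports MiB/s as IOPS x io_size / 2^20, so 22407.41 x 4096 / 1048576 = 87.53 MiB/s, and the per-second samples follow the same relation (21157.00 -> 82.64, 22373.00 -> 87.39).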
00:29:23.740 [2024-12-09 06:28:18.212479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489169 ] 00:29:23.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.740 Zero copy mechanism will not be used. 00:29:23.740 [2024-12-09 06:28:18.270824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.740 [2024-12-09 06:28:18.300888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.999 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.999 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:23.999 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:23.999 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:23.999 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:24.000 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.000 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.569 nvme0n1 00:29:24.569 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:24.569 06:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:24.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:24.569 Zero copy mechanism will not be used. 00:29:24.569 Running I/O for 2 seconds... 
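After each run the harness checks which accel module actually executed the crc32c digest work; with scan_dsa=false it expects the plain software module, not DSA. The check, condensed from the rpc/jq calls shown after the first run and repeated below:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints 'software <count>', matching exp_module=software above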
00:29:26.891 3331.00 IOPS, 416.38 MiB/s [2024-12-09T05:28:21.478Z] 3717.00 IOPS, 464.62 MiB/s
00:29:26.891 Latency(us)
00:29:26.891 [2024-12-09T05:28:21.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.891 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:26.891 nvme0n1 : 2.00 3717.36 464.67 0.00 0.00 4301.14 586.04 11443.59
00:29:26.891 [2024-12-09T05:28:21.478Z] ===================================================================================================================
00:29:26.891 [2024-12-09T05:28:21.478Z] Total : 3717.36 464.67 0.00 0.00 4301.14 586.04 11443.59
00:29:26.891 {
00:29:26.891 "results": [
00:29:26.891 {
00:29:26.891 "job": "nvme0n1",
00:29:26.891 "core_mask": "0x2",
00:29:26.891 "workload": "randread",
00:29:26.891 "status": "finished",
00:29:26.891 "queue_depth": 16,
00:29:26.891 "io_size": 131072,
00:29:26.891 "runtime": 2.004113,
00:29:26.891 "iops": 3717.3552589100514,
00:29:26.891 "mibps": 464.6694073637564,
00:29:26.891 "io_failed": 0,
00:29:26.891 "io_timeout": 0,
00:29:26.891 "avg_latency_us": 4301.135165720186,
00:29:26.891 "min_latency_us": 586.0430769230769,
00:29:26.891 "max_latency_us": 11443.593846153846
00:29:26.891 }
00:29:26.891 ],
00:29:26.891 "core_count": 1
00:29:26.891 }
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:26.891 | select(.opcode=="crc32c")
00:29:26.891 | "\(.module_name) \(.executed)"'
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 489169
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 489169 ']'
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 489169
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489169
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489169'
00:29:26.891 killing process with pid 489169
00:29:26.891 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 489169
00:29:26.891 Received shutdown signal, test time was about 2.000000 seconds
00:29:26.891
00:29:26.891 Latency(us)
00:29:26.892 [2024-12-09T05:28:21.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.892 [2024-12-09T05:28:21.479Z] ===================================================================================================================
00:29:26.892 [2024-12-09T05:28:21.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 489169
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=489779
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 489779 /var/tmp/bperf.sock
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 489779 ']'
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:26.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:26.892 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:27.152 [2024-12-09 06:28:21.492938] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:29:27.152 [2024-12-09 06:28:21.492991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489779 ] 00:29:27.152 [2024-12-09 06:28:21.551878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.152 [2024-12-09 06:28:21.581552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.152 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.152 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:27.152 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:27.152 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:27.152 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:27.413 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.413 06:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.673 nvme0n1 00:29:27.673 06:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:27.673 06:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.673 Running I/O for 2 seconds... 
00:29:29.999 28958.00 IOPS, 113.12 MiB/s [2024-12-09T05:28:24.586Z] 28891.00 IOPS, 112.86 MiB/s
00:29:29.999 Latency(us)
00:29:29.999 [2024-12-09T05:28:24.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.999 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:29.999 nvme0n1 : 2.01 28892.21 112.86 0.00 0.00 4422.65 2003.89 7713.08
00:29:29.999 [2024-12-09T05:28:24.586Z] ===================================================================================================================
00:29:29.999 [2024-12-09T05:28:24.586Z] Total : 28892.21 112.86 0.00 0.00 4422.65 2003.89 7713.08
00:29:29.999 {
00:29:29.999 "results": [
00:29:29.999 {
00:29:29.999 "job": "nvme0n1",
00:29:29.999 "core_mask": "0x2",
00:29:29.999 "workload": "randwrite",
00:29:29.999 "status": "finished",
00:29:29.999 "queue_depth": 128,
00:29:29.999 "io_size": 4096,
00:29:29.999 "runtime": 2.005731,
00:29:29.999 "iops": 28892.20937403869,
00:29:29.999 "mibps": 112.86019286733864,
00:29:29.999 "io_failed": 0,
00:29:29.999 "io_timeout": 0,
00:29:29.999 "avg_latency_us": 4422.651854251012,
00:29:29.999 "min_latency_us": 2003.8892307692308,
00:29:29.999 "max_latency_us": 7713.083076923077
00:29:29.999 }
00:29:29.999 ],
00:29:29.999 "core_count": 1
00:29:29.999 }
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:29.999 | select(.opcode=="crc32c")
00:29:29.999 | "\(.module_name) \(.executed)"'
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 489779
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 489779 ']'
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 489779
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489779
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489779'
00:29:29.999 killing process with pid 489779
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 489779
00:29:29.999 Received shutdown signal, test time was about 2.000000 seconds
00:29:29.999
00:29:29.999 Latency(us)
00:29:29.999 [2024-12-09T05:28:24.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.999 [2024-12-09T05:28:24.586Z] ===================================================================================================================
00:29:29.999 [2024-12-09T05:28:24.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 489779
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=490379
00:29:29.999 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 490379 /var/tmp/bperf.sock
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 490379 ']'
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:30.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:30.000 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:30.260 [2024-12-09 06:28:24.585697] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:29:30.260 [2024-12-09 06:28:24.585751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490379 ] 00:29:30.260 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:30.260 Zero copy mechanism will not be used. 00:29:30.260 [2024-12-09 06:28:24.643157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.260 [2024-12-09 06:28:24.672693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.260 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.260 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:30.260 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:30.260 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:30.260 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:30.522 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.522 06:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.781 nvme0n1 00:29:30.781 06:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:30.781 06:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.041 Zero copy mechanism will not be used. 00:29:31.041 Running I/O for 2 seconds... 
00:29:32.918 6321.00 IOPS, 790.12 MiB/s [2024-12-09T05:28:27.505Z] 5933.50 IOPS, 741.69 MiB/s
00:29:32.918 Latency(us)
00:29:32.918 [2024-12-09T05:28:27.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.918 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:32.918 nvme0n1 : 2.01 5925.80 740.73 0.00 0.00 2694.43 1172.09 12098.95
00:29:32.918 [2024-12-09T05:28:27.505Z] ===================================================================================================================
00:29:32.918 [2024-12-09T05:28:27.505Z] Total : 5925.80 740.73 0.00 0.00 2694.43 1172.09 12098.95
00:29:32.918 {
00:29:32.918 "results": [
00:29:32.918 {
00:29:32.918 "job": "nvme0n1",
00:29:32.918 "core_mask": "0x2",
00:29:32.918 "workload": "randwrite",
00:29:32.918 "status": "finished",
00:29:32.918 "queue_depth": 16,
00:29:32.918 "io_size": 131072,
00:29:32.918 "runtime": 2.005298,
00:29:32.918 "iops": 5925.802549047573,
00:29:32.918 "mibps": 740.7253186309466,
00:29:32.918 "io_failed": 0,
00:29:32.918 "io_timeout": 0,
00:29:32.918 "avg_latency_us": 2694.433218236783,
00:29:32.918 "min_latency_us": 1172.0861538461538,
00:29:32.918 "max_latency_us": 12098.953846153847
00:29:32.918 }
00:29:32.918 ],
00:29:32.918 "core_count": 1
00:29:32.918 }
00:29:32.918 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:32.918 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:32.918 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:32.918 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:32.918 | select(.opcode=="crc32c")
00:29:32.918 | "\(.module_name) \(.executed)"'
00:29:32.918 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 490379
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 490379 ']'
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 490379
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 490379
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 490379'
00:29:33.177 killing process with pid 490379
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 490379
00:29:33.177 Received shutdown signal, test time was about 2.000000 seconds
00:29:33.177
00:29:33.177 Latency(us)
00:29:33.177 [2024-12-09T05:28:27.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.177 [2024-12-09T05:28:27.764Z] ===================================================================================================================
00:29:33.177 [2024-12-09T05:28:27.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:33.177 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 490379
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 488402
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 488402 ']'
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 488402
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488402
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488402'
00:29:33.437 killing process with pid 488402
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 488402
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 488402
00:29:33.437
00:29:33.437 real 0m14.863s
00:29:33.437 user 0m28.997s
00:29:33.437 sys 0m3.592s
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:33.437 06:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:33.437 ************************************
00:29:33.437 END TEST nvmf_digest_clean
00:29:33.437 ************************************
00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:33.696 ************************************
00:29:33.696 START TEST nvmf_digest_error
00:29:33.696 ************************************
00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- #
run_digest_error 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=490926 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 490926 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 490926 ']' 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.696 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.696 [2024-12-09 06:28:28.128384] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:29:33.696 [2024-12-09 06:28:28.128439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.696 [2024-12-09 06:28:28.221552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.696 [2024-12-09 06:28:28.254765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.696 [2024-12-09 06:28:28.254799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.696 [2024-12-09 06:28:28.254805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.696 [2024-12-09 06:28:28.254810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.696 [2024-12-09 06:28:28.254815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.696 [2024-12-09 06:28:28.255297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 [2024-12-09 06:28:28.969277] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.633 06:28:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 null0 00:29:34.633 [2024-12-09 06:28:29.043352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.633 [2024-12-09 06:28:29.067536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=491066 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 491066 /var/tmp/bperf.sock 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 491066 ']' 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.633 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.633 [2024-12-09 06:28:29.122662] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:29:34.633 [2024-12-09 06:28:29.122706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491066 ] 00:29:34.633 [2024-12-09 06:28:29.180817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.633 [2024-12-09 06:28:29.211103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.892 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.461 nvme0n1 00:29:35.461 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:35.461 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.461 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.461 
06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.461 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:35.461 06:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.461 Running I/O for 2 seconds... 00:29:35.461 [2024-12-09 06:28:29.969368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:29.969397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:29.969407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:29.980026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:29.980048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:29.980055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:29.990420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:29.990439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:29.990446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:29.999044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:29.999061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:29.999069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:30.009001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:30.009020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:30.009027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:30.018590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:30.018607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:30.018614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:30.027307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:30.027324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:30.027330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.461 [2024-12-09 06:28:30.038811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.461 [2024-12-09 06:28:30.038829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.461 [2024-12-09 06:28:30.038835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.049088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.049106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.049115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.057348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.057365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.057372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.069441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.069462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.069469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.077902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.077919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.088004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.088021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.088028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.097423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.097440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.097456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.106542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.106560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.106567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.114593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.114610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.114617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.124402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.124419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.124426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.133079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.133096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.133103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.143194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.143213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.143220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.155738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.155756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.155762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.164496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.164513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.164519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.175315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.175333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.185535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.185555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.185562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.194569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.721 [2024-12-09 06:28:30.194586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.721 [2024-12-09 06:28:30.194593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.721 [2024-12-09 06:28:30.204468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.204485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.204492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.213613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.213630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.213636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.222955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.222972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:35.722 [2024-12-09 06:28:30.222978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.231859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.231883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.241261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.241278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.250197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.250214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.250220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.260507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.260524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.260530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.269881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.269897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.269904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.279092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.279109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.279116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.722 [2024-12-09 06:28:30.287932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:35.722 [2024-12-09 06:28:30.287948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.722 [2024-12-09 06:28:30.287954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:35.722 [2024-12-09 06:28:30.297674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570)
00:29:35.722 [2024-12-09 06:28:30.297691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.722 [2024-12-09 06:28:30.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-12-09 06:28:30.307388 through 06:28:30.954414] the same three-record pattern repeats for ~67 further READ commands on sqid:1 (cid and lba vary; len:1; SGL TRANSPORT DATA BLOCK TRANSPORT 0x0): nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports *ERROR*: data digest error on tqpair=(0x1b35570), and each command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0001 p:0 m:0 dnr:0
26839.00 IOPS, 104.84 MiB/s [2024-12-09T05:28:31.090Z]
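The repeating failure above is the NVMe/TCP data digest (DDGST) check: the transport defines DDGST as a CRC32C over the data portion of a PDU, and nvme_tcp_accel_seq_recv_compute_crc32_done (the accel-sequence completion named in every record) raises the *ERROR* when the locally computed CRC32C disagrees with the digest carried in the received PDU, which is the condition this test appears to provoke deliberately. The interleaved throughput sample (26839.00 IOPS, 104.84 MiB/s) is consistent with 4 KiB I/O: 26839 × 4096 B/s ≈ 104.84 MiB/s. A minimal standalone sketch of the digest check follows; it is illustrative only, using a bitwise reference CRC32C rather than SPDK's accel-offloaded implementation, and the file and variable names are ours:

/* crc32c_check.c: illustrative NVMe/TCP-style data digest (DDGST) check.
 * Bitwise reference CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the CRC the NVMe/TCP transport specifies for HDGST/DDGST.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;               /* initial value */

	while (len--) {
		crc ^= *buf++;
		for (int bit = 0; bit < 8; bit++) /* reflected bitwise update */
			crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
	}
	return crc ^ 0xFFFFFFFFu;                 /* final XOR */
}

int main(void)
{
	uint8_t payload[512] = { [0] = 0xAB, [511] = 0xCD };

	/* Sender computes the DDGST over the PDU data... */
	uint32_t ddgst = crc32c(payload, sizeof(payload));

	/* ...one bit flips in flight... */
	payload[100] ^= 0x01;

	/* ...and the receiver's recomputed digest no longer matches. */
	if (crc32c(payload, sizeof(payload)) != ddgst)
		fprintf(stderr, "data digest error (expected 0x%08x)\n",
			(unsigned)ddgst);
	return 0;
}

Because the mismatch is caught by the transport rather than reported by the controller as a media error, the host completes the command with a transport-level status, which is the (00/22) seen in every completion above.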
[2024-12-09 06:28:30.964013 through 06:28:31.511683] the pattern continues for ~60 further READ commands on sqid:1 (cid and lba vary; len:1; SGL TRANSPORT DATA BLOCK TRANSPORT 0x0): nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports *ERROR*: data digest error on tqpair=(0x1b35570), and each command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0001 p:0 m:0 dnr:0; the last of these, cid:110, completes below
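For reference, the "(00/22)" that spdk_nvme_print_completion renders in each of these records decodes as Status Code Type 0x0 (generic command status) and Status Code 0x22 (Transient Transport Error), and dnr:0 means the Do Not Retry bit is clear, so every one of these failed READs is retryable. A small sketch of unpacking that status word (field layout per the NVMe base specification; status_decode.c and the struct/field names are ours):

/* status_decode.c: unpack the NVMe completion status word that
 * spdk_nvme_print_completion renders as "(00/22) ... p:0 m:0 dnr:0".
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
	uint8_t p;    /* bit 0: phase tag */
	uint8_t sc;   /* bits 8:1: status code (0x22 = transient transport error) */
	uint8_t sct;  /* bits 11:9: status code type (0x0 = generic) */
	uint8_t m;    /* bit 14: more */
	uint8_t dnr;  /* bit 15: do not retry */
};

static struct nvme_status decode(uint16_t sw)
{
	return (struct nvme_status){
		.p   = sw & 1u,
		.sc  = (sw >> 1) & 0xFFu,
		.sct = (sw >> 9) & 0x7u,
		.m   = (sw >> 14) & 1u,
		.dnr = (sw >> 15) & 1u,
	};
}

int main(void)
{
	/* SCT 0x0, SC 0x22, all flag bits clear: the completions in this log. */
	uint16_t sw = (uint16_t)(0x22u << 1);
	struct nvme_status s = decode(sw);

	printf("(%02x/%02x) p:%u m:%u dnr:%u retryable:%s\n",
	       (unsigned)s.sct, (unsigned)s.sc, (unsigned)s.p,
	       (unsigned)s.m, (unsigned)s.dnr, s.dnr ? "no" : "yes");
	return 0;
}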
00:29:37.026 [2024-12-09 06:28:31.511700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.511707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.519748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.519765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.519772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.529669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.529686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.529693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.538751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.538773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.548209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.548226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.548236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.556076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.556093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.556100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.566167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.566184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.566191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.578054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.578070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.578077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.587568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.587585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.587591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.596693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.596710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.596717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.026 [2024-12-09 06:28:31.604603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.026 [2024-12-09 06:28:31.604621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.026 [2024-12-09 06:28:31.604627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.614624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.614641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.614647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.624800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.624817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.624823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.634335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.634352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.634359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.643271] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.643295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.651744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.651761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.651768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.661275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.661292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.661299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.671385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.671402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.671409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.679680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.679697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.679704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.689525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.689542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.689550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.699559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.699576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.699583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.708546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.708563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.708573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.717463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.717480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.717487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.727701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.727718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.727725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.736982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.737000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.288 [2024-12-09 06:28:31.737006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.288 [2024-12-09 06:28:31.745660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.288 [2024-12-09 06:28:31.745677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.745684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.755318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.755335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.755342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.763659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.763675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.763682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.774205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.774222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.774228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.782420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.782438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.791728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.791748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.791754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.800772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.800788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.810490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.810507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.810514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.819601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.819619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.819625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.828662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.828685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.837871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.837889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.837895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.847685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.847702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.847709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.856249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.856266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.856272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.289 [2024-12-09 06:28:31.865333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.289 [2024-12-09 06:28:31.865351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.289 [2024-12-09 06:28:31.865357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.874224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.874241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.874248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.883779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.883796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.883802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.892330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.892347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.550 [2024-12-09 06:28:31.892354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.901501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.901518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.901524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.911155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.911172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.911179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.919639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.919655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.919661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.929101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.929119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.929125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.938538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.938555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.938562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.946895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.946914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.550 [2024-12-09 06:28:31.946924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.550 [2024-12-09 06:28:31.956552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b35570) 00:29:37.550 [2024-12-09 06:28:31.956569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
00:29:37.550 27179.50 IOPS, 106.17 MiB/s
00:29:37.550 Latency(us)
[2024-12-09T05:28:32.137Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:29:37.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:37.550 nvme0n1 : 2.00 27200.16 106.25 0.00 0.00 4700.70 2407.19 14216.27
[2024-12-09T05:28:32.137Z] ===================================================================================================================
[2024-12-09T05:28:32.137Z] Total                       :   27200.16     106.25       0.00       0.00    4700.70    2407.19   14216.27
00:29:37.550 {
00:29:37.550   "results": [
00:29:37.550     {
00:29:37.550       "job": "nvme0n1",
00:29:37.550       "core_mask": "0x2",
00:29:37.550       "workload": "randread",
00:29:37.550       "status": "finished",
00:29:37.550       "queue_depth": 128,
00:29:37.550       "io_size": 4096,
00:29:37.550       "runtime": 2.003187,
00:29:37.550       "iops": 27200.156550536718,
00:29:37.550       "mibps": 106.25061152553405,
00:29:37.550       "io_failed": 0,
00:29:37.550       "io_timeout": 0,
00:29:37.550       "avg_latency_us": 4700.699387150922,
00:29:37.550       "min_latency_us": 2407.1876923076925,
00:29:37.550       "max_latency_us": 14216.270769230769
00:29:37.550     }
00:29:37.550   ],
00:29:37.550   "core_count": 1
00:29:37.550 }
00:29:37.550 06:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:37.551 06:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:37.551 06:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:37.551 | .driver_specific
00:29:37.551 | .nvme_error
00:29:37.551 | .status_code
00:29:37.551 | .command_transient_transport_error'
00:29:37.551 06:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 491066
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 491066 ']'
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 491066
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491066
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491066'
00:29:37.811 killing process with pid 491066
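The '(( 213 > 0 ))' assertion above is the pass criterion for this phase: get_transient_errcount must report a non-zero COMMAND TRANSIENT TRANSPORT ERROR count after the digest-corruption run. (The MiB/s figure is just IOPS x I/O size: 27200.16 x 4096 / 2^20 = 106.25 MiB/s.) Condensed from the traced commands, the check amounts to the following sketch (same rpc.py path, socket, and jq filter as in the trace; variable names are illustrative):

    #!/usr/bin/env bash
    # Read the per-status-code NVMe error counters kept by bdevperf
    # (enabled earlier in this script via bdev_nvme_set_options --nvme-error-stat).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The phase passes only if at least one injected digest error surfaced.
    (( errs > 0 )) && echo "OK: $errs transient transport errors counted"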
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 491066
00:29:37.811 Received shutdown signal, test time was about 2.000000 seconds
00:29:37.811
00:29:37.811 Latency(us)
[2024-12-09T05:28:32.398Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
[2024-12-09T05:28:32.398Z] ===================================================================================================================
[2024-12-09T05:28:32.398Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 491066
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=491683
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 491683 /var/tmp/bperf.sock
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 491683 ']'
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:37.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:37.811 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.072 [2024-12-09 06:28:32.384400] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:29:38.072 [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491683 ]
00:29:38.072 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.072 Zero copy mechanism will not be used.
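The run_bperf_err trace above tears down the previous bperf instance and launches a fresh bdevperf for the 131072-byte pass, then waits for its RPC socket. In isolation, that step amounts to roughly the following (flags taken from the traced command; the poll loop is a simplified stand-in for the harness's waitforlisten, which retries up to max_retries=100):

    # Launch bdevperf idle (-z) on its own RPC socket.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Wait until the UNIX-domain RPC socket appears before issuing RPCs.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done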
00:29:38.072 [2024-12-09 06:28:32.443701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:38.072 [2024-12-09 06:28:32.472635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:38.072 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:38.072 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:38.072 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:38.072 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.332 06:28:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.592 nvme0n1
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:38.592 06:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:38.853 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.853 Zero copy mechanism will not be used.
00:29:38.853 Running I/O for 2 seconds...
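Gathering up the RPCs traced above, the setup for this error pass is: enable per-status-code NVMe error counters with unlimited bdev retries, keep crc32c injection disarmed while attaching, attach the controller with the TCP data digest enabled, then arm corruption of every 32nd crc32c operation so subsequent reads fail their digest check. A condensed sketch (rpc_cmd in the trace carries no -s flag, so it presumably talks to the default RPC socket of the target application rather than bperf.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Count NVMe errors per status code; -1 retries failed I/O indefinitely.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start clean: no crc32c error injection while attaching (default socket, see note above).
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest (--ddgst); the second -s here is the NVMe-oF service port.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd crc32c operation; data digests then mismatch on receive.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

perform_tests (via bdevperf.py on the same socket) then starts the 2-second workload shown running below.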
00:29:38.853 [2024-12-09 06:28:33.253708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:38.853 [2024-12-09 06:28:33.253740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.853 [2024-12-09 06:28:33.253750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... entries from 06:28:33.262245 through 06:28:33.654174 elided: the same two-line pattern repeats for dozens of further READs on this qpair -- nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010), then the failed READ (sqid:1, len:32, i.e. the 131072-byte I/Os of this pass) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, cid, lba, and sqhd differ ...]
00:29:39.117 [2024-12-09 06:28:33.658179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:39.117 [2024-12-09 06:28:33.658196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.117 [2024-12-09 06:28:33.658203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.117 [2024-12-09 06:28:33.662302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.662320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.662326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.667802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.667820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.667826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.671739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.671756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.678541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.678559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.678565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.684655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.684673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.684682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.691972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.691990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.691998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.695585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.695603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.695609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.117 [2024-12-09 06:28:33.699506] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.117 [2024-12-09 06:28:33.699523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.117 [2024-12-09 06:28:33.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.702201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.702219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.702225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.705545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.705562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.705569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.709444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.709473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.715313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.715330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.715337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.722481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.722499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.722505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.731692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.731710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.731717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:39.400 [2024-12-09 06:28:33.742602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.742619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.742626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.751254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.751272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.751279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.756736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.756754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.756761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.764392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.764410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.764416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.768185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.768203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.768210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.772017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.772035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.772041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.779409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.779426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.779433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.784907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.784924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.784934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.793595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.793612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.793618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.803347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.803364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.803370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.812240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.812257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.812263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.821348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.821364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.821371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.830500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.830517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.830523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.840014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.840032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.840039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.850391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.850409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.850415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.861963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.400 [2024-12-09 06:28:33.861981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.400 [2024-12-09 06:28:33.861987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.400 [2024-12-09 06:28:33.873768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.873789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.873796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.885312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.885329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.885336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.897096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.897114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.897120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.906474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.906492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.906498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.914889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.914907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.914913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.918831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.918848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.918855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.923759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.923776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.923782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.930817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.930834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.930841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.936173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.936191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.936197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.946240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.946264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.955983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.956000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.956007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.964426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.964444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:39.401 [2024-12-09 06:28:33.964456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.401 [2024-12-09 06:28:33.976085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.401 [2024-12-09 06:28:33.976103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.401 [2024-12-09 06:28:33.976110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:33.986227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:33.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:33.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:33.995262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:33.995279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:33.995286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.000458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.000475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.004938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.004956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.004962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.014015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.014032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.014042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.023690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.023707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.023714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.034425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.034442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.034453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.044688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.044706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.044712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.054248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.054265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.054272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.064487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.064511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.073697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.073715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.073721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.082805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.082822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.082829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.092579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.092603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.102210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.102231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.102238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.112272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.112290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.112296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.119685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.119703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.119709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.123461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.123485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.128214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.128231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.128238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.132236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.132253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.132260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.139597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.139614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.139621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.146910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.663 [2024-12-09 06:28:34.146927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.663 [2024-12-09 06:28:34.146933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.663 [2024-12-09 06:28:34.154492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.154510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.154516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.161577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.161595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.161602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.165232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.165249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.165255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.168804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.168822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.168829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.172666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.172683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.172689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.179269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.179286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.179293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.182894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.182912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.182918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.190669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.190687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.190693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.200213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.200231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.200238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.209255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.209273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.209285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.217589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.217606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.217613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.226532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.664 [2024-12-09 06:28:34.226549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.664 [2024-12-09 06:28:34.226556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.664 [2024-12-09 06:28:34.234981] 
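The run above continues below; it is a single repeating failure signature. nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports that the CRC32C data digest (DDGST) received with a data PDU does not match the digest computed over the payload, and each affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. a retryable status, consistent with a deliberate digest-error-injection test. Below is a minimal, self-contained sketch of the comparison that is failing, assuming only that the digest is CRC32C as defined by the NVMe/TCP transport binding; the function and variable names are illustrative, not SPDK's own code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
 * initial value and final XOR of 0xFFFFFFFF. NVMe/TCP data digests
 * (DDGST) are CRC32C over the PDU payload. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical received payload and trailing digest. */
    const char payload[] = "c2h data pdu payload";
    uint32_t received_ddgst = 0x12345678u;  /* corrupted on the wire */
    uint32_t computed = crc32c(payload, strlen(payload));

    /* This mismatch is what the log records as a data digest error. */
    if (computed != received_ddgst)
        printf("data digest error: computed=0x%08x received=0x%08x\n",
               (unsigned)computed, (unsigned)received_ddgst);
    return 0;
}

The bitwise loop keeps the sketch dependency-free; production code would typically use a table-driven CRC32C or a hardware crc32 instruction instead.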
00:29:39.664 4307.00 IOPS, 538.38 MiB/s [2024-12-09T05:28:34.251Z]
[2024-12-09 06:28:34.245423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:39.664 [2024-12-09 06:28:34.245442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.664 [2024-12-09 06:28:34.245453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x7d4010), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the READ commands logged between 06:28:34.253158 and 06:28:34.491207 ...]
00:29:39.925 [2024-12-09 06:28:34.500335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:39.925 [2024-12-09 06:28:34.500352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.925 [2024-12-09 06:28:34.500358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.925 [2024-12-09 06:28:34.508139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:39.925 [2024-12-09 06:28:34.508156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.925 [2024-12-09 06:28:34.508162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.518878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.518896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.518902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.527972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.527989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.535674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.535690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.535697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.539851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.539868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.539875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.543966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.543983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.543989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.548568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 
00:29:40.185 [2024-12-09 06:28:34.548585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.548591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.554217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.554237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.554244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.558849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.558866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.558873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.562667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.562683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.562690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.569080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.569097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.569103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.573113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.573130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.573136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.582653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.582671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.582677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.587505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.587523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.587529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.596638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.596656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.596662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.605649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.605666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.605673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.615222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.615240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.615246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.622617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.622635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.622641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.628098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.628116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.628122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.635969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.635986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.635993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.641610] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.641628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.641634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.645772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.185 [2024-12-09 06:28:34.645790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.185 [2024-12-09 06:28:34.645796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.185 [2024-12-09 06:28:34.648976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.648993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.648999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.652950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.652967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.652973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.657082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.657104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.657110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.660715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.660732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.660738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.664872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.664888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:29:40.186 [2024-12-09 06:28:34.671982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.671999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.672005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.679416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.679434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.679440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.684328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.684345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.684352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.694571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.694588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.694595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.702836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.702854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.702860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.709525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.709542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.709549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.717771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.717789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.717796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.725333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.725351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.725357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.729129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.729147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.729154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.732768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.732786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.732793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.737325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.737343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.737349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.746176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.746193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.746199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.750229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.750246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.750252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.753981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.753998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.754004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.760385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.760402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.760412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.764413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.764431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.764437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.186 [2024-12-09 06:28:34.768296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.186 [2024-12-09 06:28:34.768313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.186 [2024-12-09 06:28:34.768320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.777072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.777089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.777096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.785544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.785562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.785568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.795784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.795802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.795808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.804000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.804018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.804024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.811692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.811710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.811717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.815102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.815119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.815126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.819244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.819265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.819272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.823406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.823424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.823430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.832822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.832839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.832846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.840367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.840385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.840392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.845092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.845110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 
[2024-12-09 06:28:34.845116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.854657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.854675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.854682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.865795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.865812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.873793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.873810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.873817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.879341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.879359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.879365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.888039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.888056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.888062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.897219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.897236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.906818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.906835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.906842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.917906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.917924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.917931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.928160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.928178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.928184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.939051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.939067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.939074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.950444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.950466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.950473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.961057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.961074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.961080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.973059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.973075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.973085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.983402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.983418] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.983424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:34.995353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:34.995371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:34.995377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:35.006342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:35.006359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:35.006365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:35.017859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:35.017876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:35.017882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.446 [2024-12-09 06:28:35.029499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.446 [2024-12-09 06:28:35.029515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.446 [2024-12-09 06:28:35.029522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.712 [2024-12-09 06:28:35.041245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.712 [2024-12-09 06:28:35.041262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.712 [2024-12-09 06:28:35.041268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.712 [2024-12-09 06:28:35.052402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.712 [2024-12-09 06:28:35.052418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.712 [2024-12-09 06:28:35.052425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.712 [2024-12-09 06:28:35.062712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.712 [2024-12-09 06:28:35.062729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.712 [2024-12-09 06:28:35.062735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.712 [2024-12-09 06:28:35.071954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.071974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.071981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.081280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.081297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.081303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.089654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.089671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.089678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.094402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.094425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.103664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.103681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.103687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.110006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.110023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.110029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.113879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.113896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.113902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.117555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.117572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.117578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.122622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.122639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.122646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.131730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.131748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.131754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.140802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.140820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.140827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.150139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.150156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.150163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.161274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.161293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.161299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.171584] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.171602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.171608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.181619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.181636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.181643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.190258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.190275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.190282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.198940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.198958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.198964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.203882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.203900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.203909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.208007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.208025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.208031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.713 [2024-12-09 06:28:35.214111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010) 00:29:40.713 [2024-12-09 06:28:35.214129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.713 [2024-12-09 06:28:35.214135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:40.713 [2024-12-09 06:28:35.221085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:40.713 [2024-12-09 06:28:35.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.713 [2024-12-09 06:28:35.221111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.713 [2024-12-09 06:28:35.229456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:40.713 [2024-12-09 06:28:35.229474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.713 [2024-12-09 06:28:35.229480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.713 [2024-12-09 06:28:35.239432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:40.713 [2024-12-09 06:28:35.239455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.713 [2024-12-09 06:28:35.239462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.713 4323.50 IOPS, 540.44 MiB/s [2024-12-09T05:28:35.300Z]
00:29:40.713 [2024-12-09 06:28:35.248419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d4010)
00:29:40.713 [2024-12-09 06:28:35.248435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.713 [2024-12-09 06:28:35.248442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.713
00:29:40.713 Latency(us)
[2024-12-09T05:28:35.300Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:29:40.713 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:40.713 nvme0n1            :       2.00    4321.86     540.23       0.00      0.00    3698.72     519.88   18955.03
[2024-12-09T05:28:35.300Z] ===================================================================================================================
[2024-12-09T05:28:35.300Z] Total              :            4321.86     540.23       0.00      0.00    3698.72     519.88   18955.03
00:29:40.713 {
00:29:40.713   "results": [
00:29:40.713     {
00:29:40.713       "job": "nvme0n1",
00:29:40.713       "core_mask": "0x2",
00:29:40.713       "workload": "randread",
00:29:40.713       "status": "finished",
00:29:40.713       "queue_depth": 16,
00:29:40.713       "io_size": 131072,
00:29:40.713       "runtime": 2.004461,
00:29:40.713       "iops": 4321.860091066876,
00:29:40.713       "mibps": 540.2325113833595,
00:29:40.713       "io_failed": 0,
00:29:40.713       "io_timeout": 0,
00:29:40.713       "avg_latency_us": 3698.7197513741016,
00:29:40.713       "min_latency_us": 519.876923076923,
00:29:40.713       "max_latency_us": 18955.027692307693
00:29:40.713     }
00:29:40.713   ],
00:29:40.713   "core_count": 1
00:29:40.713 }
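For anyone mining these logs: the JSON block above is bdevperf's machine-readable summary of the randread error case, carrying the same figures as the human-readable table before it. A minimal sketch for pulling the headline numbers back out with jq, assuming the block has been saved to a file (bperf_result.json is an illustrative name, not something this run produced):

    # Print job name, IOPS, and average latency from a saved bdevperf JSON summary
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' bperf_result.json

Against the values above this would print "nvme0n1: 4321.860091066876 IOPS, avg latency 3698.7197513741016 us".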
00:29:40.713 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:40.713 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:40.713 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:40.713 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:40.713 | .driver_specific
00:29:40.713 | .nvme_error
00:29:40.713 | .status_code
00:29:40.713 | .command_transient_transport_error'
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 280 > 0 ))
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 491683
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 491683 ']'
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 491683
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491683
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491683'
00:29:40.974 killing process with pid 491683
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 491683
00:29:40.974 Received shutdown signal, test time was about 2.000000 seconds
00:29:40.974 
00:29:40.974 Latency(us)
00:29:40.974 [2024-12-09T05:28:35.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.974 [2024-12-09T05:28:35.561Z] ===================================================================================================================
00:29:40.974 [2024-12-09T05:28:35.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:40.974 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 491683
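Condensed, the check that just ran (digest.sh@27/@28 feeding @71) is: ask the bdevperf process over its private RPC socket for per-bdev I/O statistics, dig the transient-transport-error counter out of the NVMe error stats, and require it to be non-zero; the randread phase counted 280 such errors. A sketch of the same check, using the rpc.py path and socket straight from the trace:

    # Count transient transport errors recorded for nvme0n1 over the bperf RPC socket.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes only if the injected digest corruption produced such errors.
    (( errcount > 0 ))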
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=492240
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 492240 /var/tmp/bperf.sock
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 492240 ']'
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:41.235 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.235 [2024-12-09 06:28:35.679529] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:29:41.235 [2024-12-09 06:28:35.679581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492240 ]
00:29:41.235 [2024-12-09 06:28:35.738975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:41.235 [2024-12-09 06:28:35.767615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.495 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:41.495 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:41.495 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.495 06:28:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.495 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.754 nvme0n1
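The sequence above is the standard bperf bring-up for the randwrite phase: bdevperf is started idle (-z) on a private RPC socket, options enable per-controller NVMe error statistics and unlimited retries, and the controller is attached with the data digest (--ddgst) enabled so every data PDU carries a CRC32C. A condensed sketch of the same bring-up, with the paths from this trace; the socket-polling loop is a crude stand-in for waitforlisten, not the test's actual helper:

    # Start bdevperf idle (-z) on a private RPC socket and wait for it to listen.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

    # Record NVMe error statistics and retry transient failures indefinitely.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled; this prints the new bdev name (nvme0n1).
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0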
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:41.754 06:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:42.015 Running I/O for 2 seconds...
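digest.sh@67 re-arms the accel error injector before kicking off the workload: with the software CRC32C path set to corrupt its results, the data digest appended to outgoing write PDUs is wrong, the target's own digest check fails, and each write completes with a transient transport error; the flood of records below is exactly that. Continuing the sketch above (same $SPDK and socket assumptions; the flags are verbatim from the trace):

    # Arm the accel error injector so computed CRC32C values are corrupted,
    # then start the queued randwrite job.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests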
00:29:42.015 [2024-12-09 06:28:36.380162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:42.015 [2024-12-09 06:28:36.380320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:42.015 [2024-12-09 06:28:36.380346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... the same three-line record (tcp.c:2241 data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly a hundred further writes between 06:28:36.389 and 06:28:37.360, cid alternating 117/118, LBAs varying, all with sqhd:0072 p:0 m:0 dnr:0 ...]
00:29:42.800 [2024-12-09 06:28:37.369094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:42.800 28497.00 IOPS, 111.32 MiB/s [2024-12-09T05:28:37.387Z] [2024-12-09 06:28:37.369578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:42.800 [2024-12-09 06:28:37.369595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:42.800 [2024-12-09 06:28:37.378000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:42.800 [2024-12-09 06:28:37.378143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:42.800 [2024-12-09 06:28:37.378159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.062 [2024-12-09 06:28:37.386920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.062 [2024-12-09 06:28:37.387063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.062 [2024-12-09 06:28:37.387080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.062 [2024-12-09 06:28:37.395841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.062 [2024-12-09 06:28:37.395987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.062 [2024-12-09 06:28:37.396005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.062 [2024-12-09 06:28:37.404754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.062 [2024-12-09 06:28:37.404896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.062 [2024-12-09 06:28:37.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.062 [2024-12-09 06:28:37.413656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.062 [2024-12-09 06:28:37.413799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109
nsid:1 lba:360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.413816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.422554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.422697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.422713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.431486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.431628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.431645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.440396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.440545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.449298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.449440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.449461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.458188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.458330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.467084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.467228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.467250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.476012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.476154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.476170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.484924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.485066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.485083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.493816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.493959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.493975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.502726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.062 [2024-12-09 06:28:37.502868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.062 [2024-12-09 06:28:37.502885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.062 [2024-12-09 06:28:37.511601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.511744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.511760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.520532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.520675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.520691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.529419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.529585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.538353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.538502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.538519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.547256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.547408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.547425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.556162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.556305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.556322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.565047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.565190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.565207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.573979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.574122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.574140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.582882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.583024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.583041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.591782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.591925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.591941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.600663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 
06:28:37.600805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.600821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.609557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.609701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.609720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.618447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.618595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.618613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.627371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.627570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.627587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.636274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.636417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.636434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.063 [2024-12-09 06:28:37.645158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.063 [2024-12-09 06:28:37.645301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.063 [2024-12-09 06:28:37.645320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.324 [2024-12-09 06:28:37.654026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.324 [2024-12-09 06:28:37.654169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.324 [2024-12-09 06:28:37.654185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.662937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 
00:29:43.325 [2024-12-09 06:28:37.663080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.663099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.671832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.671974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.671991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.680735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.680878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.680895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.689621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.689764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.689781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.698510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.698679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.707392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.707542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.707559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.716283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.716428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.716445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.725183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.725327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.725344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.734084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.734228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.734245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.742974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.743116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.743133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.751863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.752006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.752022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.760752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.760896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.760912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.769648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.769790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.769806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.778535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.778699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.787423] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.787570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.787587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.796308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.796453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.796469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.805202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.805345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.805361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.814110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.814254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.814271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.823016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.823160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.823176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.831909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.832204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.832222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.840962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.841105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.841122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.849848] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.849990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.850007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.858753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.858896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.867672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.867815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.325 [2024-12-09 06:28:37.867832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.325 [2024-12-09 06:28:37.876565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.325 [2024-12-09 06:28:37.876708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.326 [2024-12-09 06:28:37.876724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.326 [2024-12-09 06:28:37.885439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.326 [2024-12-09 06:28:37.885587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.326 [2024-12-09 06:28:37.885605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.326 [2024-12-09 06:28:37.894333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.326 [2024-12-09 06:28:37.894482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.326 [2024-12-09 06:28:37.894500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.326 [2024-12-09 06:28:37.903220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.326 [2024-12-09 06:28:37.903364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.326 [2024-12-09 06:28:37.903381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 
[2024-12-09 06:28:37.912105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.912249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.912265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.921019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.921162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.921179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.929887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.930031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.930050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.938800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.938944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.938961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.947671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.947813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.947830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.956587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.956731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.956748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.965484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.965626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.965642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.974362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.974509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.974526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.983243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.587 [2024-12-09 06:28:37.983384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.587 [2024-12-09 06:28:37.983400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.587 [2024-12-09 06:28:37.992132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:37.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:37.992289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.001034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.001177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.001194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.009922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.010070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.010089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.018820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.018962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.018978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.027705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.027848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.027865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.036597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.036741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.036757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.045484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.045629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.045649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.054381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.054529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.054545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.063287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.063428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.063452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.072179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.072320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.072337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.081066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.081210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.081226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.089965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.090107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.090124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.098857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.098998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.099015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.107747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.107889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.107906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.116624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.116767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.116783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.125505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.125648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.125664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.134472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.134617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.134635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.143363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.143509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.143525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.152261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.152402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.152419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.161162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.161303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.161323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.588 [2024-12-09 06:28:38.170044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.588 [2024-12-09 06:28:38.170187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.588 [2024-12-09 06:28:38.170204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.178932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.179092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.187817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.187961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.187977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.196712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.196853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.196870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.205604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.205748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.205764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.214491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.214634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 
06:28:38.214649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.223375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.223521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.223538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.232262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.232405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.232421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.241170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.241315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.241331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.250083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.250224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.250240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.258978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.259120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.259136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.267865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.268008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.850 [2024-12-09 06:28:38.268025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.850 [2024-12-09 06:28:38.276731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.850 [2024-12-09 06:28:38.276874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:43.851 [2024-12-09 06:28:38.276890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.285641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.285799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.294548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.294691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.294708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.303443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.303589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.303605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.312381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.312529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.312545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.321283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.321427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.321443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.330146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.330288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.851 [2024-12-09 06:28:38.330304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.851 [2024-12-09 06:28:38.339065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58 00:29:43.851 [2024-12-09 06:28:38.339208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17802 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:29:43.851 [2024-12-09 06:28:38.339225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.851 [2024-12-09 06:28:38.347955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.851 [2024-12-09 06:28:38.348096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.851 [2024-12-09 06:28:38.348113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.851 [2024-12-09 06:28:38.356847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.851 [2024-12-09 06:28:38.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.851 [2024-12-09 06:28:38.357005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.851 [2024-12-09 06:28:38.365705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.851 [2024-12-09 06:28:38.365848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.851 [2024-12-09 06:28:38.365865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.851 28612.00 IOPS, 111.77 MiB/s [2024-12-09T05:28:38.438Z] [2024-12-09 06:28:38.374607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc1b20) with pdu=0x200016efeb58
00:29:43.851 [2024-12-09 06:28:38.374748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.851 [2024-12-09 06:28:38.374764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:43.851
00:29:43.851 Latency(us)
00:29:43.851 [2024-12-09T05:28:38.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.851 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:43.851 nvme0n1 : 2.01 28612.85 111.77 0.00 0.00 4465.33 3251.59 11040.30
00:29:43.851 [2024-12-09T05:28:38.438Z] ===================================================================================================================
00:29:43.851 [2024-12-09T05:28:38.438Z] Total : 28612.85 111.77 0.00 0.00 4465.33 3251.59 11040.30
00:29:43.851 {
00:29:43.851   "results": [
00:29:43.851     {
00:29:43.851       "job": "nvme0n1",
00:29:43.851       "core_mask": "0x2",
00:29:43.851       "workload": "randwrite",
00:29:43.851       "status": "finished",
00:29:43.851       "queue_depth": 128,
00:29:43.851       "io_size": 4096,
00:29:43.851       "runtime": 2.005812,
00:29:43.851       "iops": 28612.85105483465,
00:29:43.851       "mibps": 111.76894943294785,
00:29:43.851       "io_failed": 0,
00:29:43.851       "io_timeout": 0,
00:29:43.851       "avg_latency_us": 4465.331289485535,
00:29:43.851       "min_latency_us": 3251.5938461538462,
00:29:43.851       "max_latency_us": 11040.295384615385
00:29:43.851     }
00:29:43.851   ],
00:29:43.851   "core_count": 1
00:29:43.851 }
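Note: the pass above finishes at ~28.6k IOPS even though digest corruption was being injected. The JSON is self-consistent (28612.85 IOPS x 2.005812 s runtime is roughly 57.4k completed 4 KiB writes), and io_failed stays 0 because bdev_nvme_set_options --bdev-retry-count -1 (set again for the next pass below) effectively removes the bdev-layer retry cap, so each failed write is reissued while the per-controller error counters still record every failed attempt. The trace that follows reads those counters back out of the bperf process; a standalone equivalent of that check, assuming the bperf RPC socket is still up (the jq path is the same filter the harness pipes through below):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here the filter yields 225, which is what the (( 225 > 0 )) check in the trace asserts against.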
00:29:43.851 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:43.851 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:43.851 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:43.851 | .driver_specific
00:29:43.851 | .nvme_error
00:29:43.851 | .status_code
00:29:43.851 | .command_transient_transport_error'
00:29:43.851 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 ))
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 492240
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 492240 ']'
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 492240
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492240
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492240'
00:29:44.111 killing process with pid 492240
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 492240
00:29:44.111 Received shutdown signal, test time was about 2.000000 seconds
00:29:44.111
00:29:44.111 Latency(us)
00:29:44.111 [2024-12-09T05:28:38.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.111 [2024-12-09T05:28:38.698Z] ===================================================================================================================
00:29:44.111 [2024-12-09T05:28:38.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 492240
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=492642
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 492642 /var/tmp/bperf.sock
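Note: with the first bperf process killed and reaped, run_bperf_err starts the error-injection pass over again, this time with 128 KiB random writes at queue depth 16. The trace below launches a fresh bdevperf and waits for its RPC socket; pulled out of that trace for readability, the launch is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

-m 2 is the core mask (matching the "Reactor started on core 1" notice below), -r names the UNIX-domain RPC socket, -w/-o/-q/-t define the 131072-byte random-write workload at depth 16 for 2 seconds, and -z keeps bdevperf idle until a perform_tests RPC arrives, which is what lets the harness re-arm the error injection before any I/O is issued.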
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 492642 ']'
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:44.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:44.111 [2024-12-09 06:28:38.796237] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:29:44.111 [2024-12-09 06:28:38.796291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492642 ]
00:29:44.111 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:44.111 Zero copy mechanism will not be used.
00:29:44.111 [2024-12-09 06:28:38.854400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:44.111 [2024-12-09 06:28:38.883733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:44.111 06:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:44.632 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.203 nvme0n1
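Note: the controller for this pass is configured over two different RPC sockets. bperf_rpc explicitly targets the bdevperf process at /var/tmp/bperf.sock, while rpc_cmd (used for accel_error_inject_error) appears to go to the default socket of the SPDK target app, i.e. the CRC32C corruption is injected on the target side of the TCP connection. --nvme-error-stat keeps the per-status-code counters that get_transient_errcount read above, and --ddgst attaches the controller with data digests enabled, so every data PDU carries a CRC32C that the corrupted accel operations can break. A condensed, illustrative replay of the same sequence (arguments copied from the traces; the corrupt injection itself is traced next):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf="$rpc -s /var/tmp/bperf.sock"                       # bdevperf's RPC socket
  $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable        # target side: clear old injection
  $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # attach with data digest on
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # re-arm corruption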
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:45.203 06:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:45.203 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:45.203 Zero copy mechanism will not be used.
00:29:45.203 Running I/O for 2 seconds...
00:29:45.203 [2024-12-09 06:28:39.658930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:45.204 [2024-12-09 06:28:39.659038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.204 [2024-12-09 06:28:39.659068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:45.204 [2024-12-09 06:28:39.668767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:45.204 [2024-12-09 06:28:39.669020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.204 [2024-12-09 06:28:39.669038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:45.204 [2024-12-09 06:28:39.679338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:45.204 [2024-12-09 06:28:39.679579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.204 [2024-12-09 06:28:39.679596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:45.204 [2024-12-09 06:28:39.689200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:45.204 [2024-12-09 06:28:39.689446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.204 [2024-12-09 06:28:39.689468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:45.204 [2024-12-09 06:28:39.698987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:45.204 [2024-12-09 06:28:39.699162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.204 [2024-12-09 06:28:39.699178] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.708425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.708682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.708700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.718126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.718373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.718389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.727506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.727719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.736756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.737055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.737073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.746262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.746476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.746492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.756936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.757145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.757161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.767618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.767851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 
06:28:39.767868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.774635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.774830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.774848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.204 [2024-12-09 06:28:39.781355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.204 [2024-12-09 06:28:39.781650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.204 [2024-12-09 06:28:39.781668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.789783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.790081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.790099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.797477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.797811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.797828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.804124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.804316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.804333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.811636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.811961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.811979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.818411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.818619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
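Note: from here to the end of the 2-second window the same three-line pattern repeats: a data-digest mismatch on the qpair, the WRITE that carried it (len:32 at the namespace's 4 KiB block size is 128 KiB, matching -o 131072), and a completion with COMMAND TRANSIENT TRANSPORT ERROR, where (00/22) reads as SCT/SC in hex (generic command status, Command Transient Transport Error) and dnr:0 means the Do Not Retry bit is clear, so the host may reissue the write. When skimming a captured copy of this console output, the injected failures can be tallied with a one-liner (the log filename is a stand-in):

  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' nvmf-tcp-phy-autotest.log

and paired with the LBAs they hit:

  grep -o 'WRITE sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' nvmf-tcp-phy-autotest.log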
00:29:45.466 [2024-12-09 06:28:39.818636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.825555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.825903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.825921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.832176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.832358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.832375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.840876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.841141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.841159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.850022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.850211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.850228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.858506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.858798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.858816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.868474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.868847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.878900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.879142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.879159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.889447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.889682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.889702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.899279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.899535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.899551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.909756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.910001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.910016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.921324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.921608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.921624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.931588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.931652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.931667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.940404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.940714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.940730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.951076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.951321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.951337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.961630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.961882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.961897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.466 [2024-12-09 06:28:39.970985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.466 [2024-12-09 06:28:39.971050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.466 [2024-12-09 06:28:39.971065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:39.980553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:39.980620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:39.980635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:39.988214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:39.988263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:39.988279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:39.996983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:39.997042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:39.997057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.007040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.007311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.007328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.016030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.016084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.016100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.024096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.024167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.024183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.033188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.033253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.033268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.043753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.043810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.043826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.467 [2024-12-09 06:28:40.050084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.467 [2024-12-09 06:28:40.050134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.467 [2024-12-09 06:28:40.050149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.058636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.058703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.058718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.066917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.067129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.067144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.075171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.075240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.075255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.083296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.083420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.083435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.090797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.090866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.090880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.099774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.099846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.099861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.108918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.109140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.109156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.119837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.120087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.120103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.130879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.131112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.131131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.140758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.141041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.141057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.151924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.151991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.152006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.162834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.163093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.163108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.173549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.173844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.173860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.184296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.184567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.184583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.194686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.194847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.194863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.206184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.206463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.206479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.215984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 
06:28:40.216066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.216081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.221065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.729 [2024-12-09 06:28:40.221115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.729 [2024-12-09 06:28:40.221131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.729 [2024-12-09 06:28:40.228605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.228832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.228847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.239349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.239394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.239410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.246763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.246829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.246844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.251340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.251390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.251405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.257776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.257946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.257961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.266346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 
00:29:45.730 [2024-12-09 06:28:40.266407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.266422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.271480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.271526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.271542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.277740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.277870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.285079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.285146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.285161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.290951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.291114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.291130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.297738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.297827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.730 [2024-12-09 06:28:40.307218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.730 [2024-12-09 06:28:40.307271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.730 [2024-12-09 06:28:40.307287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.316495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with 
pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.316566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.316582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.325373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.325607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.325622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.336472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.336732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.336747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.347180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.347413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.347428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.357605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.357678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.365741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.365788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.373815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.373881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.373896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.381217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.381261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.381277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.390807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.391066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.391081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.399042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.399099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.399114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.407932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.407988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.408003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.417933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.417978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.417993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.425391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.425454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.425469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.434397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.434466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.434484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.443870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.443920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.443936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.453190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.453401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.453416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.462151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.462428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.462444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.471345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.471411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.471426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.477865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.477945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.477960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.487198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.487266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.487281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.495236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.992 [2024-12-09 06:28:40.495408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.992 [2024-12-09 06:28:40.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.992 [2024-12-09 06:28:40.506173] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.506439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.506458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.516850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.517115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.517131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.528236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.528500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.528516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.539422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.539676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.539691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.549236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.549489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.549505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.559750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.560042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.993 [2024-12-09 06:28:40.567499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:45.993 [2024-12-09 06:28:40.567589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.993 [2024-12-09 06:28:40.567604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.255 
[2024-12-09 06:28:40.578499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.578750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.578765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.588791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.588893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.588909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.599175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.599396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.599412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.609620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.609910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.609927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.620703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.620934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.620950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.631599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.631862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.631879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.642990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.643215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.643230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.653493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.654843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.654859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.255 3379.00 IOPS, 422.38 MiB/s [2024-12-09T05:28:40.842Z] [2024-12-09 06:28:40.664735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.664980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.664997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.675622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.675864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.675880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.686529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.686811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.686827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.697599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.697836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.697854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.708331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.708552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.708567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.713966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.714016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.714031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.721753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.721825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.721840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.729138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.255 [2024-12-09 06:28:40.729182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.255 [2024-12-09 06:28:40.729198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.255 [2024-12-09 06:28:40.736487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.736546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.736562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.745159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.745221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.745236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.752888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.753093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.753108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.760832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.761113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.770312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.770370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.770385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.777790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.778092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.778108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.786416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.786710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.786727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.796231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.796279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.796295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.802184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.802247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.802263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.808016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.808066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.808081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.815559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.815619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.815634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.821732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.821818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 
06:28:40.821833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.830929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.831046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.831061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.256 [2024-12-09 06:28:40.838269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.256 [2024-12-09 06:28:40.838483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.256 [2024-12-09 06:28:40.838499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.845831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.845898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.845913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.854847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.854969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.854985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.864558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.864613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.864628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.870767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.870809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.870825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.877811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.877857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.517 [2024-12-09 06:28:40.877873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.885398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.885458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.885473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.891580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.891817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.891833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.901669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.901723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.901743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.909966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.910253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.910270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.918716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.918767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.918783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.927607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.927809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.927825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.935174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.935481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.935498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.943557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.943613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.943629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.952065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.952379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.952396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.962542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.962805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.962822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.973096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.973347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.973363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.982904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.983116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.983133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:40.993710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:40.993981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:40.993997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.004396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.004494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:41.004510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.015859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.016140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:41.016156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.026611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.026887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:41.026904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.037268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.037473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:41.037488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.048211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.048467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.517 [2024-12-09 06:28:41.048483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.517 [2024-12-09 06:28:41.058441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.517 [2024-12-09 06:28:41.058722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.518 [2024-12-09 06:28:41.058739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.518 [2024-12-09 06:28:41.069337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.518 [2024-12-09 06:28:41.069561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.518 [2024-12-09 06:28:41.069577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.518 [2024-12-09 06:28:41.080035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.518 [2024-12-09 06:28:41.080326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.518 [2024-12-09 06:28:41.080342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.518 [2024-12-09 06:28:41.091092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.518 [2024-12-09 06:28:41.091378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.518 [2024-12-09 06:28:41.091394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.518 [2024-12-09 06:28:41.101105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.518 [2024-12-09 06:28:41.101348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.518 [2024-12-09 06:28:41.101363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.111519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.111816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.111832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.121461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.121661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.121676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.131145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.131436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.131455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.140791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.141012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.141028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.150801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.151060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.151082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.161311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.161518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.161536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.171415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.171691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.181426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.181679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.181694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.190947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.191183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.191199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.200150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.200344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.200360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.210366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.210661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.210678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.220174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.220392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.220408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.230138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.230362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.230378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.240442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.240540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.240555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.249555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.249846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.249863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.258409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.258619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.258634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.267589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.267840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.267855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.276784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.276997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.277012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.286520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 
06:28:41.286835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.286851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.296030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.296229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.296244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.306090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.306356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.306372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.315140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.315199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.315214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.324800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.779 [2024-12-09 06:28:41.325064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.779 [2024-12-09 06:28:41.325081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.779 [2024-12-09 06:28:41.330318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.780 [2024-12-09 06:28:41.330381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.780 [2024-12-09 06:28:41.330396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.780 [2024-12-09 06:28:41.337742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.780 [2024-12-09 06:28:41.337803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.780 [2024-12-09 06:28:41.337818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.780 [2024-12-09 06:28:41.345814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 
00:29:46.780 [2024-12-09 06:28:41.346019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.780 [2024-12-09 06:28:41.346034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.780 [2024-12-09 06:28:41.352370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.780 [2024-12-09 06:28:41.352633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.780 [2024-12-09 06:28:41.352649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.780 [2024-12-09 06:28:41.360317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:46.780 [2024-12-09 06:28:41.360399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.780 [2024-12-09 06:28:41.360414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.365227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.365271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.365286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.373002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.373301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.373317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.379031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.379088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.379103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.384066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.384321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.384351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.392378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) 
with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.392442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.392462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.398018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.398063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.398078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.402862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.411238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.411331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.411347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.419821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.419893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.419909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.426300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.426345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.426360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.433069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.433132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.433147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.438259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.438488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.438503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.448739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.449034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.449052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.458661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.458888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.458903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.468142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.468394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.468411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.478060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.478314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.488080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.488141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.488157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.495278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.495324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.495339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.498989] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.499168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.499183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.506423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.506496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.506512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.512938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.512992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.513008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.520477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.520540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.520556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.527193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.527275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.527290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.042 [2024-12-09 06:28:41.534159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.042 [2024-12-09 06:28:41.534294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.042 [2024-12-09 06:28:41.534310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.543117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.543164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.543179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.547997] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.548043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.548059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.551144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.551188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.551203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.554791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.554839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.554854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.557776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.557833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.557848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.560483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.560531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.560550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.563231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.563284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.563299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.566080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.566136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.566152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 
[2024-12-09 06:28:41.568676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.568732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.568747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.571277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.571322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.571336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.573765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.573812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.573828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.576253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.576298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.576313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.578776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.578829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.578844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.582001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.582062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.582078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.584581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.584633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.584648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.587068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.587118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.587133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.589668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.589717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.589733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.592364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.592424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.592440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.595030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.595200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.595215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.602523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.602566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.602582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.605371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.605421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.605436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.609934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.610201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.610218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.614918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.614980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.614995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.617623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.617666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.617682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.621654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.621702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.043 [2024-12-09 06:28:41.625776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.043 [2024-12-09 06:28:41.625820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.043 [2024-12-09 06:28:41.625836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.629217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.629268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.629284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.631721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.631773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.631788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.634248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.634293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.634308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.636775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.636831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.636847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.639274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.639328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.639344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.641786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.641833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.641851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.644288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.644333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.644348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.646766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.646818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.646833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.649244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.649293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.649308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.305 [2024-12-09 06:28:41.652027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8 00:29:47.305 [2024-12-09 06:28:41.652128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.305 [2024-12-09 06:28:41.652143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:47.305 3836.00 IOPS, 479.50 MiB/s [2024-12-09T05:28:41.892Z] [2024-12-09 06:28:41.660231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc2000) with pdu=0x200016eff3c8
00:29:47.305 [2024-12-09 06:28:41.660495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.305 [2024-12-09 06:28:41.660511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:47.305
00:29:47.305 Latency(us)
00:29:47.305 [2024-12-09T05:28:41.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:47.305 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:47.305 nvme0n1 : 2.01 3835.57 479.45 0.00 0.00 4164.59 1178.39 12401.43
00:29:47.305 [2024-12-09T05:28:41.892Z] ===================================================================================================================
00:29:47.305 [2024-12-09T05:28:41.892Z] Total : 3835.57 479.45 0.00 0.00 4164.59 1178.39 12401.43
00:29:47.305 {
00:29:47.305   "results": [
00:29:47.305     {
00:29:47.305       "job": "nvme0n1",
00:29:47.305       "core_mask": "0x2",
00:29:47.305       "workload": "randwrite",
00:29:47.305       "status": "finished",
00:29:47.305       "queue_depth": 16,
00:29:47.305       "io_size": 131072,
00:29:47.305       "runtime": 2.005439,
00:29:47.305       "iops": 3835.5691696431554,
00:29:47.305       "mibps": 479.4461462053944,
00:29:47.305       "io_failed": 0,
00:29:47.305       "io_timeout": 0,
00:29:47.305       "avg_latency_us": 4164.590263610545,
00:29:47.305       "min_latency_us": 1178.3876923076923,
00:29:47.305       "max_latency_us": 12401.427692307692
00:29:47.305     }
00:29:47.305   ],
00:29:47.305   "core_count": 1
00:29:47.305 }
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:47.305 | .driver_specific
00:29:47.305 | .nvme_error
00:29:47.305 | .status_code
00:29:47.305 | .command_transient_transport_error'
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 249 > 0 ))
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 492642
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 492642 ']'
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 492642
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:47.305 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492642
00:29:47.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:47.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:47.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492642' 00:29:47.566 killing process with pid 492642 00:29:47.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 492642 00:29:47.566 Received shutdown signal, test time was about 2.000000 seconds 00:29:47.566 00:29:47.566 Latency(us) 00:29:47.566 [2024-12-09T05:28:42.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.566 [2024-12-09T05:28:42.153Z] =================================================================================================================== 00:29:47.566 [2024-12-09T05:28:42.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:47.566 06:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 492642 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 490926 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 490926 ']' 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 490926 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 490926 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 490926' 00:29:47.566 killing process with pid 490926 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 490926 00:29:47.566 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 490926 00:29:47.827 00:29:47.827 real 0m14.135s 00:29:47.827 user 0m27.600s 00:29:47.827 sys 0m3.345s 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.827 ************************************ 00:29:47.827 END TEST nvmf_digest_error 00:29:47.827 ************************************ 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@124 -- # set +e 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.827 rmmod nvme_tcp 00:29:47.827 rmmod nvme_fabrics 00:29:47.827 rmmod nvme_keyring 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 490926 ']' 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 490926 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 490926 ']' 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 490926 00:29:47.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (490926) - No such process 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 490926 is not found' 00:29:47.827 Process with pid 490926 is not found 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.827 06:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.394 00:29:50.394 real 0m39.033s 00:29:50.394 user 0m58.896s 00:29:50.394 sys 0m12.636s 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.394 ************************************ 00:29:50.394 END TEST nvmf_digest 00:29:50.394 ************************************ 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.394 ************************************ 00:29:50.394 START TEST nvmf_bdevperf 00:29:50.394 ************************************ 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:50.394 * Looking for test storage... 00:29:50.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.394 --rc genhtml_branch_coverage=1 00:29:50.394 --rc genhtml_function_coverage=1 00:29:50.394 --rc genhtml_legend=1 00:29:50.394 --rc geninfo_all_blocks=1 00:29:50.394 --rc geninfo_unexecuted_blocks=1 00:29:50.394 00:29:50.394 ' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.394 --rc genhtml_branch_coverage=1 00:29:50.394 --rc genhtml_function_coverage=1 00:29:50.394 --rc genhtml_legend=1 00:29:50.394 --rc geninfo_all_blocks=1 00:29:50.394 --rc geninfo_unexecuted_blocks=1 00:29:50.394 00:29:50.394 ' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.394 --rc genhtml_branch_coverage=1 00:29:50.394 --rc genhtml_function_coverage=1 00:29:50.394 --rc genhtml_legend=1 00:29:50.394 --rc geninfo_all_blocks=1 00:29:50.394 --rc geninfo_unexecuted_blocks=1 00:29:50.394 00:29:50.394 ' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.394 --rc genhtml_branch_coverage=1 00:29:50.394 --rc genhtml_function_coverage=1 00:29:50.394 --rc genhtml_legend=1 00:29:50.394 --rc geninfo_all_blocks=1 00:29:50.394 --rc geninfo_unexecuted_blocks=1 00:29:50.394 00:29:50.394 ' 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.394 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.395 06:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:58.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:58.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
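For reference, the NIC discovery being traced here (and continuing just below with the "Found net devices under ..." lines) reduces to a short sysfs walk: each supported PCI function is mapped to its kernel net device by globbing /sys/bus/pci/devices/<addr>/net/. A minimal sketch of that walk, assuming the PCI addresses (0000:4b:00.0/.1) and the cvl_* names reported by this rig:

    # Map each e810 PCI function to its net device, as nvmf/common.sh does above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            # ${dev##*/} strips the sysfs path, leaving only the interface name.
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
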
00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:58.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:58.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.534 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.535 06:28:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:29:58.535 00:29:58.535 --- 10.0.0.2 ping statistics --- 00:29:58.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.535 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:29:58.535 00:29:58.535 --- 10.0.0.1 ping statistics --- 00:29:58.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.535 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=497182 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 497182 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 497182 ']' 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 [2024-12-09 06:28:52.146135] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
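The nvmf_tcp_init sequence traced above boils down to: move one port of the NIC into a private namespace, address both ends, open TCP port 4420 through iptables, and ping in both directions before starting the target. A condensed sketch of the commands already shown (interface and namespace names are specific to this run; the iptables comment flag from the trace is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
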
00:29:58.535 [2024-12-09 06:28:52.146233] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.535 [2024-12-09 06:28:52.230927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.535 [2024-12-09 06:28:52.282283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.535 [2024-12-09 06:28:52.282340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.535 [2024-12-09 06:28:52.282348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.535 [2024-12-09 06:28:52.282355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.535 [2024-12-09 06:28:52.282361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.535 [2024-12-09 06:28:52.284175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.535 [2024-12-09 06:28:52.284329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.535 [2024-12-09 06:28:52.284330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.535 06:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 [2024-12-09 06:28:53.003238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 Malloc0 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.535 [2024-12-09 06:28:53.067562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.535 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.535 { 00:29:58.535 "params": { 00:29:58.535 "name": "Nvme$subsystem", 00:29:58.535 "trtype": "$TEST_TRANSPORT", 00:29:58.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.536 "adrfam": "ipv4", 00:29:58.536 "trsvcid": "$NVMF_PORT", 00:29:58.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.536 "hdgst": ${hdgst:-false}, 00:29:58.536 "ddgst": ${ddgst:-false} 00:29:58.536 }, 00:29:58.536 "method": "bdev_nvme_attach_controller" 00:29:58.536 } 00:29:58.536 EOF 00:29:58.536 )") 00:29:58.536 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:58.536 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:58.536 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:58.536 06:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.536 "params": { 00:29:58.536 "name": "Nvme1", 00:29:58.536 "trtype": "tcp", 00:29:58.536 "traddr": "10.0.0.2", 00:29:58.536 "adrfam": "ipv4", 00:29:58.536 "trsvcid": "4420", 00:29:58.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.536 "hdgst": false, 00:29:58.536 "ddgst": false 00:29:58.536 }, 00:29:58.536 "method": "bdev_nvme_attach_controller" 00:29:58.536 }' 00:29:58.796 [2024-12-09 06:28:53.121311] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
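The tgt_init sequence above (host/bdevperf.sh@15-@21) reduces to one application launch plus five JSON-RPC calls. A minimal sketch using SPDK's scripts/rpc.py, which is what the rpc_cmd helper in the trace wraps; the repo path matches this workspace and all flags are copied from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Launch the target inside the namespace, then wait for /var/tmp/spdk.sock to appear.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as traced
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because /var/tmp/spdk.sock is a path-based Unix socket it remains reachable from the root namespace, so only the TCP listener itself lives inside cvl_0_0_ns_spdk.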
00:29:58.796 [2024-12-09 06:28:53.121359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497483 ] 00:29:58.796 [2024-12-09 06:28:53.208807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.796 [2024-12-09 06:28:53.243283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.056 Running I/O for 1 seconds... 00:29:59.996 10864.00 IOPS, 42.44 MiB/s 00:29:59.996 Latency(us) 00:29:59.996 [2024-12-09T05:28:54.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:59.996 Verification LBA range: start 0x0 length 0x4000 00:29:59.996 Nvme1n1 : 1.01 10925.63 42.68 0.00 0.00 11650.63 2117.32 15627.82 00:29:59.996 [2024-12-09T05:28:54.583Z] =================================================================================================================== 00:29:59.996 [2024-12-09T05:28:54.583Z] Total : 10925.63 42.68 0.00 0.00 11650.63 2117.32 15627.82 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=497736 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:59.996 { 00:29:59.996 "params": { 00:29:59.996 "name": "Nvme$subsystem", 00:29:59.996 "trtype": "$TEST_TRANSPORT", 00:29:59.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.996 "adrfam": "ipv4", 00:29:59.996 "trsvcid": "$NVMF_PORT", 00:29:59.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.996 "hdgst": ${hdgst:-false}, 00:29:59.996 "ddgst": ${ddgst:-false} 00:29:59.996 }, 00:29:59.996 "method": "bdev_nvme_attach_controller" 00:29:59.996 } 00:29:59.996 EOF 00:29:59.996 )") 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:59.996 06:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:59.996 "params": { 00:29:59.996 "name": "Nvme1", 00:29:59.996 "trtype": "tcp", 00:29:59.996 "traddr": "10.0.0.2", 00:29:59.996 "adrfam": "ipv4", 00:29:59.996 "trsvcid": "4420", 00:29:59.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.996 "hdgst": false, 00:29:59.996 "ddgst": false 00:29:59.996 }, 00:29:59.996 "method": "bdev_nvme_attach_controller" 00:29:59.996 }' 00:30:00.256 [2024-12-09 06:28:54.607438] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:30:00.256 [2024-12-09 06:28:54.607524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497736 ] 00:30:00.256 [2024-12-09 06:28:54.695769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.256 [2024-12-09 06:28:54.730354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.515 Running I/O for 15 seconds... 00:30:02.463 11394.00 IOPS, 44.51 MiB/s [2024-12-09T05:28:57.621Z] 11414.50 IOPS, 44.59 MiB/s [2024-12-09T05:28:57.621Z] 06:28:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 497182 00:30:03.034 06:28:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:03.034 [2024-12-09 06:28:57.572190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.034 [2024-12-09 06:28:57.572228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 06:28:57.572258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 06:28:57.572276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 06:28:57.572293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 06:28:57.572315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 
06:28:57.572334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.034 [2024-12-09 06:28:57.572344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.034 [2024-12-09 06:28:57.572352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.572987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.572995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 [2024-12-09 06:28:57.573141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.035 [2024-12-09 06:28:57.573152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.035 
[2024-12-09 06:28:57.573161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573578] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.036 [2024-12-09 06:28:57.573991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.036 [2024-12-09 06:28:57.573998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 
06:28:57.574171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.037 [2024-12-09 06:28:57.574616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.037 [2024-12-09 06:28:57.574625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.038 [2024-12-09 06:28:57.574632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.038 [2024-12-09 06:28:57.574640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.038 [2024-12-09 06:28:57.574647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.038 [2024-12-09 06:28:57.574655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1480360 is same with the state(6) to be set 00:30:03.038 [2024-12-09 06:28:57.574663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:03.038 [2024-12-09 06:28:57.574669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:03.038 [2024-12-09 06:28:57.574675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:30:03.038 [2024-12-09 06:28:57.574683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.038 [2024-12-09 06:28:57.578021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.038 [2024-12-09 06:28:57.578072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.038 [2024-12-09 06:28:57.578931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.038 [2024-12-09 06:28:57.578967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.038 [2024-12-09 06:28:57.578977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.038 [2024-12-09 06:28:57.579204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.038 [2024-12-09 06:28:57.579412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.038 [2024-12-09 06:28:57.579420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.038 [2024-12-09 06:28:57.579429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.038 [2024-12-09 06:28:57.579438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.038 [2024-12-09 06:28:57.591959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.038 [2024-12-09 06:28:57.592505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.038 [2024-12-09 06:28:57.592532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.038 [2024-12-09 06:28:57.592541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.038 [2024-12-09 06:28:57.592752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.038 [2024-12-09 06:28:57.592958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.038 [2024-12-09 06:28:57.592966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.038 [2024-12-09 06:28:57.592973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:03.038 [2024-12-09 06:28:57.592981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.038 [2024-12-09 06:28:57.605656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.038 [2024-12-09 06:28:57.606261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.038 [2024-12-09 06:28:57.606299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.038 [2024-12-09 06:28:57.606310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.038 [2024-12-09 06:28:57.606545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.038 [2024-12-09 06:28:57.606754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.038 [2024-12-09 06:28:57.606762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.038 [2024-12-09 06:28:57.606770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.038 [2024-12-09 06:28:57.606778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.299 [2024-12-09 06:28:57.619282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.299 [2024-12-09 06:28:57.619947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.299 [2024-12-09 06:28:57.619987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.299 [2024-12-09 06:28:57.619997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.299 [2024-12-09 06:28:57.620223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.299 [2024-12-09 06:28:57.620431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.299 [2024-12-09 06:28:57.620440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.299 [2024-12-09 06:28:57.620459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.299 [2024-12-09 06:28:57.620468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
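Every retry cycle in this stretch has the same shape: bdev_nvme disconnects the controller, nvme_tcp tries to reopen the queue pair to 10.0.0.2 port 4420, connect() fails with errno 111 (ECONNREFUSED on Linux, meaning nothing is accepting on that address and port while the target side of the test is down), the follow-up flush reports "(9): Bad file descriptor" (errno 9, EBADF) because no socket was ever established, and the reset is marked failed until the next attempt. A standalone sketch of just the failing step; pointed at any address/port with no listener, a blocking connect() produces the same errno 111 that posix.c logs:

    /* Reproduce the failing step in isolation: a blocking TCP connect()
     * to a port with no listener fails with ECONNREFUSED (errno 111 on
     * Linux). Address and port are the ones from the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
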
00:30:03.299 [2024-12-09 06:28:57.632958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.299 [2024-12-09 06:28:57.633510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.299 [2024-12-09 06:28:57.633552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.299 [2024-12-09 06:28:57.633570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.299 [2024-12-09 06:28:57.633799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.299 [2024-12-09 06:28:57.634008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.299 [2024-12-09 06:28:57.634016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.299 [2024-12-09 06:28:57.634024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.299 [2024-12-09 06:28:57.634032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.299 [2024-12-09 06:28:57.646532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.299 [2024-12-09 06:28:57.647152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.299 [2024-12-09 06:28:57.647195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.299 [2024-12-09 06:28:57.647205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.299 [2024-12-09 06:28:57.647434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.299 [2024-12-09 06:28:57.647654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.299 [2024-12-09 06:28:57.647663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.299 [2024-12-09 06:28:57.647670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.299 [2024-12-09 06:28:57.647678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.299 [2024-12-09 06:28:57.660157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.299 [2024-12-09 06:28:57.660777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.299 [2024-12-09 06:28:57.660822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.299 [2024-12-09 06:28:57.660833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.299 [2024-12-09 06:28:57.661062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.299 [2024-12-09 06:28:57.661272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.299 [2024-12-09 06:28:57.661280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.299 [2024-12-09 06:28:57.661288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.299 [2024-12-09 06:28:57.661296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.299 [2024-12-09 06:28:57.673780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.299 [2024-12-09 06:28:57.674272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.299 [2024-12-09 06:28:57.674318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.299 [2024-12-09 06:28:57.674329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.299 [2024-12-09 06:28:57.674570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.299 [2024-12-09 06:28:57.674786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.299 [2024-12-09 06:28:57.674796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.299 [2024-12-09 06:28:57.674803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.674811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.687491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.688150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.688198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.688209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.688441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.688662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.688672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.688680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.688688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.701179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.701742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.701798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.701810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.702047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.702258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.702266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.702274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.702283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.714799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.715483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.715542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.715554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.715795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.716006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.716016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.716031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.716041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.728367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.729061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.729122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.729134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.729374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.729600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.729609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.729617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.729625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.741961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.742654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.742713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.742725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.742965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.743176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.743186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.743194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.743203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.755535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.756221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.756279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.756291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.756542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.756754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.756764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.756773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.756782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.769096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.769755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.769815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.769827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.770067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.770278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.770287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.770295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.770303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.782820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.783486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.783546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.783558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.783798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.784009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.784018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.784025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.784034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.796562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.797174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.797202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.797210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.797418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.797633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.797645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.797652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.797661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.810163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.810844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.810905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.810925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.811165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.811376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.811386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.811394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.811403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.823848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.824556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.824617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.824629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.824869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.825080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.825091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.825100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.825109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.300 [2024-12-09 06:28:57.837513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.838010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.838038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.300 [2024-12-09 06:28:57.838047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.300 [2024-12-09 06:28:57.838267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.300 [2024-12-09 06:28:57.838483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.300 [2024-12-09 06:28:57.838492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.300 [2024-12-09 06:28:57.838500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.300 [2024-12-09 06:28:57.838508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.300 [2024-12-09 06:28:57.851208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.300 [2024-12-09 06:28:57.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.300 [2024-12-09 06:28:57.851734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.301 [2024-12-09 06:28:57.851742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.301 [2024-12-09 06:28:57.851949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.301 [2024-12-09 06:28:57.852165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.301 [2024-12-09 06:28:57.852175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.301 [2024-12-09 06:28:57.852182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.301 [2024-12-09 06:28:57.852190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.301 [2024-12-09 06:28:57.864890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.301 [2024-12-09 06:28:57.865564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-09 06:28:57.865625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.301 [2024-12-09 06:28:57.865638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.301 [2024-12-09 06:28:57.865879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.301 [2024-12-09 06:28:57.866094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.301 [2024-12-09 06:28:57.866105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.301 [2024-12-09 06:28:57.866112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.301 [2024-12-09 06:28:57.866121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.301 [2024-12-09 06:28:57.878458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.301 [2024-12-09 06:28:57.879181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.301 [2024-12-09 06:28:57.879240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.301 [2024-12-09 06:28:57.879252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.301 [2024-12-09 06:28:57.879503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.301 [2024-12-09 06:28:57.879716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.301 [2024-12-09 06:28:57.879727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.301 [2024-12-09 06:28:57.879735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.301 [2024-12-09 06:28:57.879744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.563 [2024-12-09 06:28:57.892060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.892630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.892660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.892669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.892878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.893086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.893095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.893112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.893120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.564 [2024-12-09 06:28:57.905628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.906308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.906367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.906379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.906635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.906849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.906859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.906867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.906876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.564 [2024-12-09 06:28:57.919216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.919919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.919979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.919991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.920232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.920444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.920468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.920477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.920486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.564 [2024-12-09 06:28:57.932807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.933406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.933434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.933443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.933660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.933867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.933876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.933883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.933890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.564 [2024-12-09 06:28:57.946427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.946906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.946932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.946941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.947148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.947356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.947365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.947373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.947380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.564 [2024-12-09 06:28:57.960078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.960766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.960824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.960836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.961077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.961288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.961297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.961306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.961315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.564 [2024-12-09 06:28:57.973657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.974254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.974315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.974327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.974578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.974792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.974801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.974809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.974818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.564 [2024-12-09 06:28:57.987338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:57.988044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:57.988105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:57.988126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:57.988368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:57.988595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:57.988606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:57.988614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:57.988625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.564 [2024-12-09 06:28:58.000953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:58.001700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:58.001761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:58.001773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:58.002013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.564 [2024-12-09 06:28:58.002225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.564 [2024-12-09 06:28:58.002235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.564 [2024-12-09 06:28:58.002243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.564 [2024-12-09 06:28:58.002252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.564 [2024-12-09 06:28:58.014609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.564 [2024-12-09 06:28:58.015294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-12-09 06:28:58.015353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.564 [2024-12-09 06:28:58.015365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.564 [2024-12-09 06:28:58.015620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.015832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.015842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.015851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.015860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.565 [2024-12-09 06:28:58.028184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.028890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.028950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.028963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.029203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.029423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.029432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.029440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.029465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.565 [2024-12-09 06:28:58.041796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.042445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.042514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.042526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.042766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.042978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.042987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.042995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.043003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.565 9621.00 IOPS, 37.58 MiB/s [2024-12-09T05:28:58.152Z] [2024-12-09 06:28:58.055511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.056118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.056145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.056154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.056362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.056579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.056596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.056604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.056612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
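The interleaved "9621.00 IOPS, 37.58 MiB/s" entry above is a periodic throughput sample from the I/O generator driving this test; the bracketed value is the sample's UTC timestamp. The two numbers are consistent with a 4 KiB I/O size: 9621 * 4096 B = 39,407,616 B/s, which is 37.58 MiB/s. That also matches the "len:8" WRITEs printed earlier in the section, assuming a 512-byte logical block (8 blocks * 512 B = 4 KiB).
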
00:30:03.565 [2024-12-09 06:28:58.069122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.069677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.069701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.069709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.069916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.070124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.070133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.070150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.070160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.565 [2024-12-09 06:28:58.082903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.083492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.083517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.083525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.083734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.083940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.083958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.083966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.083975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.565 [2024-12-09 06:28:58.096493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.097029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.097052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.097061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.097267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.097481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.097490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.097498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.097506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.565 [2024-12-09 06:28:58.110201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.110723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.110732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.110940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.111148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.111157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.111169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.111178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.565 [2024-12-09 06:28:58.123922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.124481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.124541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.124554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.124795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.125007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.125017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.125024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.125033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.565 [2024-12-09 06:28:58.137569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.565 [2024-12-09 06:28:58.138263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-12-09 06:28:58.138323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.565 [2024-12-09 06:28:58.138335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.565 [2024-12-09 06:28:58.138588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.565 [2024-12-09 06:28:58.138815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.565 [2024-12-09 06:28:58.138826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.565 [2024-12-09 06:28:58.138835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.565 [2024-12-09 06:28:58.138843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.827 [2024-12-09 06:28:58.151180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.827 [2024-12-09 06:28:58.151776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-12-09 06:28:58.151805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.827 [2024-12-09 06:28:58.151813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.827 [2024-12-09 06:28:58.152023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.827 [2024-12-09 06:28:58.152229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.827 [2024-12-09 06:28:58.152238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.827 [2024-12-09 06:28:58.152245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.827 [2024-12-09 06:28:58.152252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.827 [2024-12-09 06:28:58.164744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.827 [2024-12-09 06:28:58.165320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-12-09 06:28:58.165343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:03.827 [2024-12-09 06:28:58.165359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:03.827 [2024-12-09 06:28:58.165574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:03.827 [2024-12-09 06:28:58.165780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.827 [2024-12-09 06:28:58.165788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.827 [2024-12-09 06:28:58.165795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.827 [2024-12-09 06:28:58.165803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.827 [2024-12-09 06:28:58.178286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.178961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.827 [2024-12-09 06:28:58.179020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.827 [2024-12-09 06:28:58.179032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.827 [2024-12-09 06:28:58.179273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.827 [2024-12-09 06:28:58.179499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.827 [2024-12-09 06:28:58.179509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.827 [2024-12-09 06:28:58.179517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.827 [2024-12-09 06:28:58.179526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.827 [2024-12-09 06:28:58.191855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.192550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.827 [2024-12-09 06:28:58.192611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.827 [2024-12-09 06:28:58.192624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.827 [2024-12-09 06:28:58.192864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.827 [2024-12-09 06:28:58.193076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.827 [2024-12-09 06:28:58.193086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.827 [2024-12-09 06:28:58.193094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.827 [2024-12-09 06:28:58.193103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.827 [2024-12-09 06:28:58.205432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.206088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.827 [2024-12-09 06:28:58.206146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.827 [2024-12-09 06:28:58.206158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.827 [2024-12-09 06:28:58.206398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.827 [2024-12-09 06:28:58.206633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.827 [2024-12-09 06:28:58.206643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.827 [2024-12-09 06:28:58.206652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.827 [2024-12-09 06:28:58.206660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.827 [2024-12-09 06:28:58.219003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.219654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.827 [2024-12-09 06:28:58.219685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.827 [2024-12-09 06:28:58.219693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.827 [2024-12-09 06:28:58.219901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.827 [2024-12-09 06:28:58.220108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.827 [2024-12-09 06:28:58.220119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.827 [2024-12-09 06:28:58.220127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.827 [2024-12-09 06:28:58.220134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.827 [2024-12-09 06:28:58.232643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.233222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.827 [2024-12-09 06:28:58.233246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.827 [2024-12-09 06:28:58.233254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.827 [2024-12-09 06:28:58.233468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.827 [2024-12-09 06:28:58.233675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.827 [2024-12-09 06:28:58.233683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.827 [2024-12-09 06:28:58.233690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.827 [2024-12-09 06:28:58.233697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.827 [2024-12-09 06:28:58.246200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.827 [2024-12-09 06:28:58.246845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.246905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.246916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.247157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.247371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.247380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.247401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.247410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.259938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.260477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.260507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.260516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.260725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.260932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.260940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.260948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.260955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.273663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.274241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.274265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.274273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.274548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.274759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.274771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.274779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.274786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.287283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.287979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.288039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.288051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.288292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.288516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.288526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.288534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.288543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.300872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.301539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.301599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.301611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.301852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.302064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.302074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.302082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.302091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.314634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.315287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.315347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.315359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.315612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.315826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.315835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.315844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.315853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.328375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.329070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.329130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.329142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.329383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.329607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.329617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.329625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.329634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.341967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.342492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.342524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.342540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.342750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.342956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.342965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.342972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.342979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.355693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.356367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.356427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.356440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.356691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.356903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.356914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.356921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.356930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.369435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.370037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-12-09 06:28:58.370066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.828 [2024-12-09 06:28:58.370075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.828 [2024-12-09 06:28:58.370282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.828 [2024-12-09 06:28:58.370497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.828 [2024-12-09 06:28:58.370505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.828 [2024-12-09 06:28:58.370513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.828 [2024-12-09 06:28:58.370521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.828 [2024-12-09 06:28:58.383032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.828 [2024-12-09 06:28:58.383614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-12-09 06:28:58.383639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.829 [2024-12-09 06:28:58.383647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.829 [2024-12-09 06:28:58.383854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.829 [2024-12-09 06:28:58.384068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.829 [2024-12-09 06:28:58.384077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.829 [2024-12-09 06:28:58.384084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.829 [2024-12-09 06:28:58.384091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.829 [2024-12-09 06:28:58.396591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.829 [2024-12-09 06:28:58.397177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-12-09 06:28:58.397202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.829 [2024-12-09 06:28:58.397210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.829 [2024-12-09 06:28:58.397416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.829 [2024-12-09 06:28:58.397630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.829 [2024-12-09 06:28:58.397639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.829 [2024-12-09 06:28:58.397647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.829 [2024-12-09 06:28:58.397653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.829 [2024-12-09 06:28:58.410140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.829 [2024-12-09 06:28:58.410689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-12-09 06:28:58.410713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:03.829 [2024-12-09 06:28:58.410721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:03.829 [2024-12-09 06:28:58.410928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:03.829 [2024-12-09 06:28:58.411134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.829 [2024-12-09 06:28:58.411143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.829 [2024-12-09 06:28:58.411150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.829 [2024-12-09 06:28:58.411158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.423850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.424425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.424454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.424463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.424672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.424878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.424887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.424903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.424910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.437584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.438263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.438321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.438333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.438583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.438796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.438806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.438814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.438822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.451340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.452034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.452094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.452105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.452346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.452573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.452583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.452590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.452599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.464924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.465385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.465413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.465421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.465637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.465844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.465852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.465860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.465867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.478597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.479120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.479142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.479149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.479357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.479574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.479583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.479590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.479597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.492296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.492845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.492868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.492876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.493082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.493288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.091 [2024-12-09 06:28:58.493297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.091 [2024-12-09 06:28:58.493304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.091 [2024-12-09 06:28:58.493311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.091 [2024-12-09 06:28:58.506049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.091 [2024-12-09 06:28:58.506624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.091 [2024-12-09 06:28:58.506648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.091 [2024-12-09 06:28:58.506656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.091 [2024-12-09 06:28:58.506863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.091 [2024-12-09 06:28:58.507069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.507079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.507086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.507093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.519638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.520221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.520259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.520475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.520684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.520693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.520701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.520709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.533223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.533821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.533845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.533853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.534059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.534264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.534273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.534280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.534286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.546826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.547502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.547562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.547574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.547814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.548026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.548036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.548043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.548053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.560374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.561029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.561089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.561101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.561342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.561575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.561585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.561593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.561601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.574107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.574808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.574869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.574881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.575122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.575335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.575345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.575353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.575363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.587695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.588382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.588442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.588466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.588708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.588920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.588929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.588937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.588945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.601282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.601911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.601941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.601950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.602160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.602366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.602376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.602391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.602399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.614921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.615541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.615584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.615593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.615819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.616027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.616035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.616043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.616051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.628549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.629160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.629219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.629231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.629484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.629697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.092 [2024-12-09 06:28:58.629705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.092 [2024-12-09 06:28:58.629713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.092 [2024-12-09 06:28:58.629722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.092 [2024-12-09 06:28:58.642239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.092 [2024-12-09 06:28:58.642870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.092 [2024-12-09 06:28:58.642930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.092 [2024-12-09 06:28:58.642941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.092 [2024-12-09 06:28:58.643182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.092 [2024-12-09 06:28:58.643394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.093 [2024-12-09 06:28:58.643403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.093 [2024-12-09 06:28:58.643411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.093 [2024-12-09 06:28:58.643420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.093 [2024-12-09 06:28:58.655941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.093 [2024-12-09 06:28:58.656559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.093 [2024-12-09 06:28:58.656619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.093 [2024-12-09 06:28:58.656631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.093 [2024-12-09 06:28:58.656872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.093 [2024-12-09 06:28:58.657083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.093 [2024-12-09 06:28:58.657093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.093 [2024-12-09 06:28:58.657101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.093 [2024-12-09 06:28:58.657110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.093 [2024-12-09 06:28:58.669636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.093 [2024-12-09 06:28:58.670321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.093 [2024-12-09 06:28:58.670379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.093 [2024-12-09 06:28:58.670391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.093 [2024-12-09 06:28:58.670645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.093 [2024-12-09 06:28:58.670858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.093 [2024-12-09 06:28:58.670867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.093 [2024-12-09 06:28:58.670876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.093 [2024-12-09 06:28:58.670885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.354 [2024-12-09 06:28:58.683213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.354 [2024-12-09 06:28:58.683920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.354 [2024-12-09 06:28:58.683980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.355 [2024-12-09 06:28:58.683993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.355 [2024-12-09 06:28:58.684233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.355 [2024-12-09 06:28:58.684445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.355 [2024-12-09 06:28:58.684468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.355 [2024-12-09 06:28:58.684476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.355 [2024-12-09 06:28:58.684485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.355 [2024-12-09 06:28:58.696791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.355 [2024-12-09 06:28:58.697516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.355 [2024-12-09 06:28:58.697577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.355 [2024-12-09 06:28:58.697598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.355 [2024-12-09 06:28:58.697839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.355 [2024-12-09 06:28:58.698051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.355 [2024-12-09 06:28:58.698061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.355 [2024-12-09 06:28:58.698069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.355 [2024-12-09 06:28:58.698078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.355 [2024-12-09 06:28:58.710426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.355 [2024-12-09 06:28:58.711123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.355 [2024-12-09 06:28:58.711183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.355 [2024-12-09 06:28:58.711195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.355 [2024-12-09 06:28:58.711435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.355 [2024-12-09 06:28:58.711660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.355 [2024-12-09 06:28:58.711670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.355 [2024-12-09 06:28:58.711679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.355 [2024-12-09 06:28:58.711687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.355 [2024-12-09 06:28:58.724029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.355 [2024-12-09 06:28:58.724715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.355 [2024-12-09 06:28:58.724773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.355 [2024-12-09 06:28:58.724785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.355 [2024-12-09 06:28:58.725025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.355 [2024-12-09 06:28:58.725236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.355 [2024-12-09 06:28:58.725245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.355 [2024-12-09 06:28:58.725254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.355 [2024-12-09 06:28:58.725262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.355 [2024-12-09 06:28:58.737774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.355 [2024-12-09 06:28:58.738489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.355 [2024-12-09 06:28:58.738549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.355 [2024-12-09 06:28:58.738562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.355 [2024-12-09 06:28:58.738801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.355 [2024-12-09 06:28:58.739019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.355 [2024-12-09 06:28:58.739028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.355 [2024-12-09 06:28:58.739036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.355 [2024-12-09 06:28:58.739045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.355 [2024-12-09 06:28:58.751381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.355 [2024-12-09 06:28:58.752046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-12-09 06:28:58.752106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.355 [2024-12-09 06:28:58.752118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.355 [2024-12-09 06:28:58.752359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.355 [2024-12-09 06:28:58.752584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.355 [2024-12-09 06:28:58.752594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.355 [2024-12-09 06:28:58.752602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.355 [2024-12-09 06:28:58.752611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.355 [2024-12-09 06:28:58.765120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.355 [2024-12-09 06:28:58.765839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-12-09 06:28:58.765899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.355 [2024-12-09 06:28:58.765911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.355 [2024-12-09 06:28:58.766151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.355 [2024-12-09 06:28:58.766364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.355 [2024-12-09 06:28:58.766373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.355 [2024-12-09 06:28:58.766380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.355 [2024-12-09 06:28:58.766389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.355 [2024-12-09 06:28:58.778708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.355 [2024-12-09 06:28:58.779392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-12-09 06:28:58.779462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.355 [2024-12-09 06:28:58.779475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.355 [2024-12-09 06:28:58.779715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.355 [2024-12-09 06:28:58.779927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.355 [2024-12-09 06:28:58.779936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.355 [2024-12-09 06:28:58.779951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.355 [2024-12-09 06:28:58.779960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.355 [2024-12-09 06:28:58.792271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.355 [2024-12-09 06:28:58.792864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-12-09 06:28:58.792924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.355 [2024-12-09 06:28:58.792936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.355 [2024-12-09 06:28:58.793176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.355 [2024-12-09 06:28:58.793388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.355 [2024-12-09 06:28:58.793398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.355 [2024-12-09 06:28:58.793406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.355 [2024-12-09 06:28:58.793415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.355 [2024-12-09 06:28:58.805946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.355 [2024-12-09 06:28:58.806579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.355 [2024-12-09 06:28:58.806639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.355 [2024-12-09 06:28:58.806652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.355 [2024-12-09 06:28:58.806892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.355 [2024-12-09 06:28:58.807104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.355 [2024-12-09 06:28:58.807113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.355 [2024-12-09 06:28:58.807120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.355 [2024-12-09 06:28:58.807129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.356 [2024-12-09 06:28:58.819664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.820337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.820397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.820409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.820662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.820875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.820884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.820891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.820900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.356 [2024-12-09 06:28:58.833411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.834116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.834176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.834188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.834428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.834654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.834665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.834673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.834682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.356 [2024-12-09 06:28:58.847081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.847807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.847867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.847879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.848120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.848331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.848340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.848348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.848357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.356 [2024-12-09 06:28:58.860683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.861379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.861437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.861461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.861703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.861914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.861923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.861931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.861940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.356 [2024-12-09 06:28:58.874261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.874969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.875029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.875048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.875289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.875512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.875522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.875530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.875539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.356 [2024-12-09 06:28:58.887846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.888541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.888602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.888614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.888855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.889067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.889075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.889083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.889091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.356 [2024-12-09 06:28:58.901418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.902095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.902156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.902167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.902408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.902634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.902644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.902652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.902660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.356 [2024-12-09 06:28:58.915164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.915889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.915948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.915960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.916200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.916426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.916435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.916443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.916472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.356 [2024-12-09 06:28:58.928782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.356 [2024-12-09 06:28:58.929380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.356 [2024-12-09 06:28:58.929409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.356 [2024-12-09 06:28:58.929417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.356 [2024-12-09 06:28:58.929635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.356 [2024-12-09 06:28:58.929843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.356 [2024-12-09 06:28:58.929852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.356 [2024-12-09 06:28:58.929859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.356 [2024-12-09 06:28:58.929867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.618 [2024-12-09 06:28:58.942376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.618 [2024-12-09 06:28:58.943036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.618 [2024-12-09 06:28:58.943097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.618 [2024-12-09 06:28:58.943109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.618 [2024-12-09 06:28:58.943349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.618 [2024-12-09 06:28:58.943575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.618 [2024-12-09 06:28:58.943586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.618 [2024-12-09 06:28:58.943594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.618 [2024-12-09 06:28:58.943603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.618 [2024-12-09 06:28:58.956122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.618 [2024-12-09 06:28:58.956826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.618 [2024-12-09 06:28:58.956886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.618 [2024-12-09 06:28:58.956898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.618 [2024-12-09 06:28:58.957139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.618 [2024-12-09 06:28:58.957350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.618 [2024-12-09 06:28:58.957359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.618 [2024-12-09 06:28:58.957374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.618 [2024-12-09 06:28:58.957383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.618 [2024-12-09 06:28:58.969706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.618 [2024-12-09 06:28:58.970388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.618 [2024-12-09 06:28:58.970461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.618 [2024-12-09 06:28:58.970473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.618 [2024-12-09 06:28:58.970714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.618 [2024-12-09 06:28:58.970925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.618 [2024-12-09 06:28:58.970934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.618 [2024-12-09 06:28:58.970942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.618 [2024-12-09 06:28:58.970950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.618 [2024-12-09 06:28:58.982466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.618 [2024-12-09 06:28:58.983032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.618 [2024-12-09 06:28:58.983086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:58.983096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:58.983287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:58.983462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:58.983473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:58.983480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:58.983487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.619 [2024-12-09 06:28:58.995287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:58.995920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:58.995971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:58.995980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:58.996168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:58.996330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:58.996338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:58.996344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:58.996352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.619 [2024-12-09 06:28:59.008022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.008689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.008737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.008746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.008932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.009094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.009100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.009107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.009113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.619 [2024-12-09 06:28:59.020774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.021323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.021365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.021375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.021564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.021727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.021734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.021739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.021746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.619 [2024-12-09 06:28:59.033544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.034066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.034086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.034092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.034251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.034408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.034417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.034423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.034429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.619 [2024-12-09 06:28:59.046270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 7215.75 IOPS, 28.19 MiB/s [2024-12-09T05:28:59.206Z] [2024-12-09 06:28:59.047932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.047949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.047959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.048117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.048274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.048280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.048285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.048291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.619 [2024-12-09 06:28:59.059021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.059521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.059536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.059542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.059698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.059855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.059862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.059867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.059872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
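[editor's note] The interleaved "7215.75 IOPS, 28.19 MiB/s" sample above is bdevperf's periodic throughput report, suggesting I/O keeps completing on a surviving path while this controller (the ", 2" tag) retries. Judging from the timestamps, the "resetting controller" notices land roughly every 13 ms. The loop below is an illustrative pacing sketch, not SPDK code; SPDK's bdev_nvme layer paces retries through its reconnect_delay_sec / ctrlr_loss_timeout_sec attach options, and whether this test sets them is not visible in this excerpt, so the delay and attempt cap here are made-up stand-ins.

    /* Illustrative retry pacing (not SPDK code): retry an operation
     * at a fixed delay until a loss deadline, roughly the shape the
     * log shows (a reconnect attempt every ~13 ms until the target
     * comes back or the controller is declared lost). */
    #define _POSIX_C_SOURCE 199309L
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static bool try_connect(void)
    {
        return false;   /* stand-in for the connect() that keeps refusing */
    }

    int main(void)
    {
        const struct timespec delay = { .tv_sec = 0,
                                        .tv_nsec = 13 * 1000000L };
        const int max_attempts = 5;   /* stand-in for a loss deadline */

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            if (try_connect()) {
                puts("reconnected");
                return 0;
            }
            printf("attempt %d failed, retrying in ~13 ms\n", attempt);
            nanosleep(&delay, NULL);
        }

        puts("giving up: controller declared lost");
        return 1;
    }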
00:30:04.619 [2024-12-09 06:28:59.071788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.072343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.072378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.072387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.072570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.072732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.072739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.072746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.072752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.619 [2024-12-09 06:28:59.084522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.085083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.085116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.085124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.085299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.085471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.085478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.085484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.085490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.619 [2024-12-09 06:28:59.097260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.097756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.619 [2024-12-09 06:28:59.097789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.619 [2024-12-09 06:28:59.097797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.619 [2024-12-09 06:28:59.097970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.619 [2024-12-09 06:28:59.098129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.619 [2024-12-09 06:28:59.098136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.619 [2024-12-09 06:28:59.098142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.619 [2024-12-09 06:28:59.098149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.619 [2024-12-09 06:28:59.110070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.619 [2024-12-09 06:28:59.110655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.110687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.110695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.110868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.111028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.111034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.111040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.111046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.620 [2024-12-09 06:28:59.122832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.123237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.123266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.123275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.123454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.123614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.123620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.123629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.123635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.620 [2024-12-09 06:28:59.135686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.136161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.136176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.136182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.136339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.136501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.136507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.136513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.136518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.620 [2024-12-09 06:28:59.148423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.148995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.149026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.149034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.149206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.149366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.149372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.149378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.149384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.620 [2024-12-09 06:28:59.161156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.161719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.161750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.161758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.161931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.162090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.162097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.162103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.162109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.620 [2024-12-09 06:28:59.173883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.174435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.174471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.174480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.174654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.174813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.174820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.174826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.174832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.620 [2024-12-09 06:28:59.186595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.187148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.187179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.187188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.187361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.187528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.187535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.187541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.187547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.620 [2024-12-09 06:28:59.199311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.620 [2024-12-09 06:28:59.199855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.620 [2024-12-09 06:28:59.199885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.620 [2024-12-09 06:28:59.199894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.620 [2024-12-09 06:28:59.200066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.620 [2024-12-09 06:28:59.200225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.620 [2024-12-09 06:28:59.200232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.620 [2024-12-09 06:28:59.200237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.620 [2024-12-09 06:28:59.200243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.882 [2024-12-09 06:28:59.212172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.882 [2024-12-09 06:28:59.212731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.882 [2024-12-09 06:28:59.212761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.882 [2024-12-09 06:28:59.212774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.882 [2024-12-09 06:28:59.212946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.882 [2024-12-09 06:28:59.213106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.882 [2024-12-09 06:28:59.213112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.882 [2024-12-09 06:28:59.213119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.882 [2024-12-09 06:28:59.213125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.882 [2024-12-09 06:28:59.224909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.225483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.225513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.225521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.225694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.225853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.225859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.225865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.225871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.883 [2024-12-09 06:28:59.237644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.238193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.238224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.238232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.238404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.238569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.238576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.238582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.238588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.883 [2024-12-09 06:28:59.250361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.250899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.250929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.250937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.251111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.251273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.251280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.251286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.251291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.883 [2024-12-09 06:28:59.263211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.263759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.263789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.263798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.263970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.264130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.264137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.264143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.264149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.883 [2024-12-09 06:28:59.276063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.276656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.276686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.276695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.276868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.277027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.277033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.277039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.277045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.883 [2024-12-09 06:28:59.288814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.883 [2024-12-09 06:28:59.289362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.883 [2024-12-09 06:28:59.289392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:04.883 [2024-12-09 06:28:59.289401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:04.883 [2024-12-09 06:28:59.289580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:04.883 [2024-12-09 06:28:59.289740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.883 [2024-12-09 06:28:59.289746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.883 [2024-12-09 06:28:59.289756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.883 [2024-12-09 06:28:59.289762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.883 [2024-12-09 06:28:59.301620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.883 [2024-12-09 06:28:59.302141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-12-09 06:28:59.302171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.883 [2024-12-09 06:28:59.302180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.883 [2024-12-09 06:28:59.302352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.883 [2024-12-09 06:28:59.302518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.883 [2024-12-09 06:28:59.302526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.883 [2024-12-09 06:28:59.302532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.883 [2024-12-09 06:28:59.302538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.883 [2024-12-09 06:28:59.314445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.883 [2024-12-09 06:28:59.314956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-12-09 06:28:59.314986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.883 [2024-12-09 06:28:59.314995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.883 [2024-12-09 06:28:59.315169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.883 [2024-12-09 06:28:59.315328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.883 [2024-12-09 06:28:59.315335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.883 [2024-12-09 06:28:59.315340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.883 [2024-12-09 06:28:59.315346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.883 [2024-12-09 06:28:59.327269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.883 [2024-12-09 06:28:59.327755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-12-09 06:28:59.327785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.883 [2024-12-09 06:28:59.327794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.883 [2024-12-09 06:28:59.327966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.883 [2024-12-09 06:28:59.328125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.883 [2024-12-09 06:28:59.328132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.883 [2024-12-09 06:28:59.328137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.883 [2024-12-09 06:28:59.328143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.883 [2024-12-09 06:28:59.340066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.883 [2024-12-09 06:28:59.340532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.883 [2024-12-09 06:28:59.340562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.883 [2024-12-09 06:28:59.340570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.883 [2024-12-09 06:28:59.340743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.883 [2024-12-09 06:28:59.340902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.883 [2024-12-09 06:28:59.340908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.883 [2024-12-09 06:28:59.340914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.340920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.352846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.353428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.353463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.353472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.353644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.353803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.353810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.353816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.353823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.365586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.366123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.366154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.366163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.366337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.366503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.366511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.366517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.366523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.378437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.378800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.378817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.378827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.378984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.379140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.379146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.379151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.379156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.391201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.391634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.391648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.391654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.391810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.391967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.391973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.391978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.391983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.404039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.404475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.404488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.404493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.404649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.404805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.404811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.404816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.404821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.416871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.417398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.417428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.417437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.417622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.417786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.417793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.417798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.417805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.429714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.430300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.430331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.430340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.430520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.430681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.430687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.430692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.430698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.442470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.442998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.443028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.443037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.443217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.443378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.443384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.443390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.443396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.884 [2024-12-09 06:28:59.455314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:04.884 [2024-12-09 06:28:59.455798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.884 [2024-12-09 06:28:59.455829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:04.884 [2024-12-09 06:28:59.455837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:04.884 [2024-12-09 06:28:59.456010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:04.884 [2024-12-09 06:28:59.456169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:04.884 [2024-12-09 06:28:59.456175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:04.884 [2024-12-09 06:28:59.456185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:04.884 [2024-12-09 06:28:59.456191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.146 [2024-12-09 06:28:59.468122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.146 [2024-12-09 06:28:59.468694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.146 [2024-12-09 06:28:59.468724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.146 [2024-12-09 06:28:59.468732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.146 [2024-12-09 06:28:59.468905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.146 [2024-12-09 06:28:59.469064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.146 [2024-12-09 06:28:59.469071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.146 [2024-12-09 06:28:59.469077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.146 [2024-12-09 06:28:59.469083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.146 [2024-12-09 06:28:59.480868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.146 [2024-12-09 06:28:59.481475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.146 [2024-12-09 06:28:59.481506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.146 [2024-12-09 06:28:59.481515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.146 [2024-12-09 06:28:59.481690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.146 [2024-12-09 06:28:59.481849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.146 [2024-12-09 06:28:59.481856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.146 [2024-12-09 06:28:59.481862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.146 [2024-12-09 06:28:59.481868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.146 [2024-12-09 06:28:59.493641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.146 [2024-12-09 06:28:59.494194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.146 [2024-12-09 06:28:59.494224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.146 [2024-12-09 06:28:59.494233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.146 [2024-12-09 06:28:59.494407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.146 [2024-12-09 06:28:59.494572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.146 [2024-12-09 06:28:59.494579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.146 [2024-12-09 06:28:59.494585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.146 [2024-12-09 06:28:59.494591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.146 [2024-12-09 06:28:59.506368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.146 [2024-12-09 06:28:59.506918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.146 [2024-12-09 06:28:59.506948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.506957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.507130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.507289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.507296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.507302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.507307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.519230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.519819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.519850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.519858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.520031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.520190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.520197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.520203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.520209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.531994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.532545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.532575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.532584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.532758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.532918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.532924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.532930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.532936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.544715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.545271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.545302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.545318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.545497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.545658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.545664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.545670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.545676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.557441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.558001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.558031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.558040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.558212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.558371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.558378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.558384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.558390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.570157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.570802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.570832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.570841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.571013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.571173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.571179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.571185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.571191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.582961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.583557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.583586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.583595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.583767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.583930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.583937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.583943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.583949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.595717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.596187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.596202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.596207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.596364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.596526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.596532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.596538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.596543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.608455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.608922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.608953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.608961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.609133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.609292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.609298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.609304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.609310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.621230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.621773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.621803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.147 [2024-12-09 06:28:59.621812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.147 [2024-12-09 06:28:59.621985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.147 [2024-12-09 06:28:59.622144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.147 [2024-12-09 06:28:59.622151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.147 [2024-12-09 06:28:59.622160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.147 [2024-12-09 06:28:59.622166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.147 [2024-12-09 06:28:59.634018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.147 [2024-12-09 06:28:59.634555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.147 [2024-12-09 06:28:59.634585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.634594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.634770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.634929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.634936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.634941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.634947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.646880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.647391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.647421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.647430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.647612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.647772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.647778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.647784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.647790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.659705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.660153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.660181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.660190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.660362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.660528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.660536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.660542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.660548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.672463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.673014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.673044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.673053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.673225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.673385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.673391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.673397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.673403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.685179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.685655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.685671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.685677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.685834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.685990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.685996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.686001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.686006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.697901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.698365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.698378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.698384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.698544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.698701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.698707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.698712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.698717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.710621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.711082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.711095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.711104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.711260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.711416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.711422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.711427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.711432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.148 [2024-12-09 06:28:59.723383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.148 [2024-12-09 06:28:59.723935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.148 [2024-12-09 06:28:59.723966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.148 [2024-12-09 06:28:59.723975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.148 [2024-12-09 06:28:59.724149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.148 [2024-12-09 06:28:59.724308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.148 [2024-12-09 06:28:59.724315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.148 [2024-12-09 06:28:59.724320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.148 [2024-12-09 06:28:59.724326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.736092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.736532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.736561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.736570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.736744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.736903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.736909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.736915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.736921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.748855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.749348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.749379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.749388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.749567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.749731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.749738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.749743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.749750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.761674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.762251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.762281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.762290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.762469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.762629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.762635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.762641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.762647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.774421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.774959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.774989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.774999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.775173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.775332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.775338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.775344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.775350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.787268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.787669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.787684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.787690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.787847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.788003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.788009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.788018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.788024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.800080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.800618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.800648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.800657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.800832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.800991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.800997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.801003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.801009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.812936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.813283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.813298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.813303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.813465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.411 [2024-12-09 06:28:59.813623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.411 [2024-12-09 06:28:59.813629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.411 [2024-12-09 06:28:59.813634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.411 [2024-12-09 06:28:59.813638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.411 [2024-12-09 06:28:59.825701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.411 [2024-12-09 06:28:59.826070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.411 [2024-12-09 06:28:59.826083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.411 [2024-12-09 06:28:59.826088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.411 [2024-12-09 06:28:59.826244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.826400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.826406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.826411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.826416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.838486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.838969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.838998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.839007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.839180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.839339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.839345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.839351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.839357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.851287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.851828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.851843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.851849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.852006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.852163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.852168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.852174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.852179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.864085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.864551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.864564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.864570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.864726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.864883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.864889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.864894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.864899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.876803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.877288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.877300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.877309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.877472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.877632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.877638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.877643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.877648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.889540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.890015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.890027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.890033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.890189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.890346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.890352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.890357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.890361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.902261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.902849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.902879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.902888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.903060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.903219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.903226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.903231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.903238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.915001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.915387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.915418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.915427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.915607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.915771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.915777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.915783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.915788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.927718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.928361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.928391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.928400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.928580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.928740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.928747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.928752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.928759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.940531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.941104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.941134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.941142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.941315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.412 [2024-12-09 06:28:59.941481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.412 [2024-12-09 06:28:59.941488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.412 [2024-12-09 06:28:59.941493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.412 [2024-12-09 06:28:59.941499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.412 [2024-12-09 06:28:59.953282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.412 [2024-12-09 06:28:59.953796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.412 [2024-12-09 06:28:59.953826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.412 [2024-12-09 06:28:59.953835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.412 [2024-12-09 06:28:59.954007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.413 [2024-12-09 06:28:59.954166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.413 [2024-12-09 06:28:59.954173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.413 [2024-12-09 06:28:59.954183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.413 [2024-12-09 06:28:59.954189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.413 [2024-12-09 06:28:59.966123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.413 [2024-12-09 06:28:59.966712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.413 [2024-12-09 06:28:59.966743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.413 [2024-12-09 06:28:59.966752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.413 [2024-12-09 06:28:59.966924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.413 [2024-12-09 06:28:59.967083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.413 [2024-12-09 06:28:59.967090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.413 [2024-12-09 06:28:59.967095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.413 [2024-12-09 06:28:59.967101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.413 [2024-12-09 06:28:59.978877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.413 [2024-12-09 06:28:59.979370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.413 [2024-12-09 06:28:59.979400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.413 [2024-12-09 06:28:59.979409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.413 [2024-12-09 06:28:59.979590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.413 [2024-12-09 06:28:59.979751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.413 [2024-12-09 06:28:59.979757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.413 [2024-12-09 06:28:59.979763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.413 [2024-12-09 06:28:59.979769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.413 [2024-12-09 06:28:59.991692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.413 [2024-12-09 06:28:59.992132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.413 [2024-12-09 06:28:59.992161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.413 [2024-12-09 06:28:59.992170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.413 [2024-12-09 06:28:59.992342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.413 [2024-12-09 06:28:59.992513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.413 [2024-12-09 06:28:59.992522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.413 [2024-12-09 06:28:59.992527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.413 [2024-12-09 06:28:59.992533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.004879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.005364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.005380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.005387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.005550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.005708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.005714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.005720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.005725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.017628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.018059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.018072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.018078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.018234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.018390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.018397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.018402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.018407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.030464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.030985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.031016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.031024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.031198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.031357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.031364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.031371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.031377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.043301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.043685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.043701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.043711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.043868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.044025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.044030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.044036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.044041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 5772.60 IOPS, 22.55 MiB/s [2024-12-09T05:29:00.262Z] [2024-12-09 06:29:00.056087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.056564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.056595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.056604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.056777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.056936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.056943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.056948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.056954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.068887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.069324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.069339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.069345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.069506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.069663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.069669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.069675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.069680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.081744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.082302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.082332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.082342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.082525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.082690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.082698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.082704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.675 [2024-12-09 06:29:00.082710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.675 [2024-12-09 06:29:00.094501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.675 [2024-12-09 06:29:00.095000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.675 [2024-12-09 06:29:00.095015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.675 [2024-12-09 06:29:00.095021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.675 [2024-12-09 06:29:00.095178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.675 [2024-12-09 06:29:00.095334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.675 [2024-12-09 06:29:00.095341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.675 [2024-12-09 06:29:00.095346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.095351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.107287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.107717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.107748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.107757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.107930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.108089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.108096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.108101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.108108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.120049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.120594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.120624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.120633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.120807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.120967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.120974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.120984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.120990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.132774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.133253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.133268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.133274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.133430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.133592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.133598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.133604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.133609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.145540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.145962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.145975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.145980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.146137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.146293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.146298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.146304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.146309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.158373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.158784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.158798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.158803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.158959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.159115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.159121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.159126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.159131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.171216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.171695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.171709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.171714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.171871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.172027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.172033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.172038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.172042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.183966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.184416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.184429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.184434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.184595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.184751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.184757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.184762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.184767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.196679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.197136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.197149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.197154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.197310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.197471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.197477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.197483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.197488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.209405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.209965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.209995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.210007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.210180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.210339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.676 [2024-12-09 06:29:00.210346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.676 [2024-12-09 06:29:00.210352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.676 [2024-12-09 06:29:00.210358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.676 [2024-12-09 06:29:00.222165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.676 [2024-12-09 06:29:00.222661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.676 [2024-12-09 06:29:00.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.676 [2024-12-09 06:29:00.222684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.676 [2024-12-09 06:29:00.222842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.676 [2024-12-09 06:29:00.222998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.677 [2024-12-09 06:29:00.223005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.677 [2024-12-09 06:29:00.223010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.677 [2024-12-09 06:29:00.223015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.677 [2024-12-09 06:29:00.234950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.677 [2024-12-09 06:29:00.235410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.677 [2024-12-09 06:29:00.235424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.677 [2024-12-09 06:29:00.235429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.677 [2024-12-09 06:29:00.235591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.677 [2024-12-09 06:29:00.235748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.677 [2024-12-09 06:29:00.235754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.677 [2024-12-09 06:29:00.235759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.677 [2024-12-09 06:29:00.235764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.677 [2024-12-09 06:29:00.247709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.677 [2024-12-09 06:29:00.248258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.677 [2024-12-09 06:29:00.248289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.677 [2024-12-09 06:29:00.248298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.677 [2024-12-09 06:29:00.248478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.677 [2024-12-09 06:29:00.248642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.677 [2024-12-09 06:29:00.248649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.677 [2024-12-09 06:29:00.248654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.677 [2024-12-09 06:29:00.248660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.939 [2024-12-09 06:29:00.260435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.939 [2024-12-09 06:29:00.260880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.939 [2024-12-09 06:29:00.260896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.939 [2024-12-09 06:29:00.260902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.939 [2024-12-09 06:29:00.261059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.939 [2024-12-09 06:29:00.261215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.939 [2024-12-09 06:29:00.261222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.939 [2024-12-09 06:29:00.261227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.939 [2024-12-09 06:29:00.261232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.939 [2024-12-09 06:29:00.273155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.939 [2024-12-09 06:29:00.273722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.939 [2024-12-09 06:29:00.273752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.939 [2024-12-09 06:29:00.273761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.939 [2024-12-09 06:29:00.273934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.939 [2024-12-09 06:29:00.274093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.939 [2024-12-09 06:29:00.274100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.939 [2024-12-09 06:29:00.274106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.939 [2024-12-09 06:29:00.274112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.939 [2024-12-09 06:29:00.285883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.939 [2024-12-09 06:29:00.286482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.939 [2024-12-09 06:29:00.286513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.939 [2024-12-09 06:29:00.286522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.939 [2024-12-09 06:29:00.286698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.939 [2024-12-09 06:29:00.286857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.939 [2024-12-09 06:29:00.286863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.939 [2024-12-09 06:29:00.286873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.939 [2024-12-09 06:29:00.286879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.939 [2024-12-09 06:29:00.298658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.939 [2024-12-09 06:29:00.299184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.939 [2024-12-09 06:29:00.299214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.939 [2024-12-09 06:29:00.299222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.939 [2024-12-09 06:29:00.299395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.939 [2024-12-09 06:29:00.299561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.299568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.299573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.299579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.311494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.940 [2024-12-09 06:29:00.312039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.940 [2024-12-09 06:29:00.312069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.940 [2024-12-09 06:29:00.312078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.940 [2024-12-09 06:29:00.312250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.940 [2024-12-09 06:29:00.312410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.312417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.312423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.312429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.324299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.940 [2024-12-09 06:29:00.324824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.940 [2024-12-09 06:29:00.324854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.940 [2024-12-09 06:29:00.324862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.940 [2024-12-09 06:29:00.325035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.940 [2024-12-09 06:29:00.325194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.325200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.325207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.325213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.337153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.940 [2024-12-09 06:29:00.337599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.940 [2024-12-09 06:29:00.337629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.940 [2024-12-09 06:29:00.337638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.940 [2024-12-09 06:29:00.337810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.940 [2024-12-09 06:29:00.337969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.337975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.337981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.337987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.349927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.940 [2024-12-09 06:29:00.350475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.940 [2024-12-09 06:29:00.350505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.940 [2024-12-09 06:29:00.350514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.940 [2024-12-09 06:29:00.350689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.940 [2024-12-09 06:29:00.350849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.350855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.350861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.350867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.362645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:05.940 [2024-12-09 06:29:00.363228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.940 [2024-12-09 06:29:00.363258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:05.940 [2024-12-09 06:29:00.363267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:05.940 [2024-12-09 06:29:00.363439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:05.940 [2024-12-09 06:29:00.363607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:05.940 [2024-12-09 06:29:00.363614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:05.940 [2024-12-09 06:29:00.363620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:05.940 [2024-12-09 06:29:00.363625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:05.940 [2024-12-09 06:29:00.375395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.940 [2024-12-09 06:29:00.375906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.940 [2024-12-09 06:29:00.375921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.940 [2024-12-09 06:29:00.375930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.940 [2024-12-09 06:29:00.376087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.940 [2024-12-09 06:29:00.376244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.940 [2024-12-09 06:29:00.376250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.940 [2024-12-09 06:29:00.376255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.940 [2024-12-09 06:29:00.376260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.940 [2024-12-09 06:29:00.388177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.940 [2024-12-09 06:29:00.388600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.940 [2024-12-09 06:29:00.388614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.940 [2024-12-09 06:29:00.388620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.940 [2024-12-09 06:29:00.388777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.940 [2024-12-09 06:29:00.388932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.940 [2024-12-09 06:29:00.388938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.940 [2024-12-09 06:29:00.388944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.940 [2024-12-09 06:29:00.388949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.940 [2024-12-09 06:29:00.401020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.940 [2024-12-09 06:29:00.401683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.940 [2024-12-09 06:29:00.401713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.940 [2024-12-09 06:29:00.401721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.940 [2024-12-09 06:29:00.401894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.940 [2024-12-09 06:29:00.402053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.940 [2024-12-09 06:29:00.402060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.940 [2024-12-09 06:29:00.402065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.940 [2024-12-09 06:29:00.402071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.940 [2024-12-09 06:29:00.413853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.940 [2024-12-09 06:29:00.414282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.940 [2024-12-09 06:29:00.414298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.940 [2024-12-09 06:29:00.414303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.940 [2024-12-09 06:29:00.414465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.940 [2024-12-09 06:29:00.414626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.940 [2024-12-09 06:29:00.414632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.940 [2024-12-09 06:29:00.414637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.940 [2024-12-09 06:29:00.414642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.940 [2024-12-09 06:29:00.426569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.940 [2024-12-09 06:29:00.427116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.940 [2024-12-09 06:29:00.427146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.940 [2024-12-09 06:29:00.427155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.940 [2024-12-09 06:29:00.427327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.427495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.427503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.427508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.427514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.941 [2024-12-09 06:29:00.439287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.439745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.439775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.439783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.439956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.440115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.440121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.440127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.440134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.941 [2024-12-09 06:29:00.452067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.452631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.452661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.452671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.452843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.453002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.453008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.453018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.453024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.941 [2024-12-09 06:29:00.464803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.465274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.465303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.465312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.465492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.465652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.465658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.465664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.465670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.941 [2024-12-09 06:29:00.477591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.478104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.478134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.478142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.478315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.478483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.478490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.478495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.478501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.941 [2024-12-09 06:29:00.490424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.490984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.491015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.491023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.491196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.491355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.491362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.491367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.491373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.941 [2024-12-09 06:29:00.503152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.503745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.503774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.503783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.503955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.504114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.504121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.504127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.504133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.941 [2024-12-09 06:29:00.515926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.941 [2024-12-09 06:29:00.516480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.941 [2024-12-09 06:29:00.516510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:05.941 [2024-12-09 06:29:00.516519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:05.941 [2024-12-09 06:29:00.516691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:05.941 [2024-12-09 06:29:00.516851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.941 [2024-12-09 06:29:00.516857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.941 [2024-12-09 06:29:00.516863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.941 [2024-12-09 06:29:00.516869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.204 [2024-12-09 06:29:00.528660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.204 [2024-12-09 06:29:00.529216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-12-09 06:29:00.529246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.204 [2024-12-09 06:29:00.529254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.204 [2024-12-09 06:29:00.529427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.204 [2024-12-09 06:29:00.529594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.204 [2024-12-09 06:29:00.529602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.204 [2024-12-09 06:29:00.529607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.204 [2024-12-09 06:29:00.529613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.204 [2024-12-09 06:29:00.541384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.204 [2024-12-09 06:29:00.541980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-12-09 06:29:00.542011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.204 [2024-12-09 06:29:00.542024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.204 [2024-12-09 06:29:00.542196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.204 [2024-12-09 06:29:00.542355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.204 [2024-12-09 06:29:00.542361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.204 [2024-12-09 06:29:00.542367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.204 [2024-12-09 06:29:00.542373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.204 [2024-12-09 06:29:00.554165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.204 [2024-12-09 06:29:00.554724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.204 [2024-12-09 06:29:00.554754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.204 [2024-12-09 06:29:00.554763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.204 [2024-12-09 06:29:00.554936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.204 [2024-12-09 06:29:00.555095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.204 [2024-12-09 06:29:00.555102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.204 [2024-12-09 06:29:00.555107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.204 [2024-12-09 06:29:00.555113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 497182 Killed "${NVMF_APP[@]}" "$@"
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.204 [2024-12-09 06:29:00.566900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.204 [2024-12-09 06:29:00.567466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.204 [2024-12-09 06:29:00.567496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.204 [2024-12-09 06:29:00.567506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.204 [2024-12-09 06:29:00.567681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.204 [2024-12-09 06:29:00.567840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.204 [2024-12-09 06:29:00.567846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.204 [2024-12-09 06:29:00.567852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.204 [2024-12-09 06:29:00.567859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=498709
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 498709
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 498709 ']'
00:30:06.204 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:06.205 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:06.205 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:06.205 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:06.205 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.205 [2024-12-09 06:29:00.579677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.205 [2024-12-09 06:29:00.580143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.205 [2024-12-09 06:29:00.580174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.205 [2024-12-09 06:29:00.580183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.205 [2024-12-09 06:29:00.580357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.205 [2024-12-09 06:29:00.580522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.205 [2024-12-09 06:29:00.580529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.205 [2024-12-09 06:29:00.580536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.205 [2024-12-09 06:29:00.580543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
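(Annotation: line 35 of bdevperf.sh kills the running nvmf target — hence the "Killed" job-control message and the ECONNREFUSED storm — and tgt_init/nvmfappstart then launch a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace while waitforlisten polls for the RPC socket. A hedged bash sketch of that flow; the command line, socket path, and retry count come from the trace above, the variable names are illustrative, not the harness's own:

# Restart the target and wait for its RPC socket, mirroring
# nvmfappstart/waitforlisten as logged above.
NETNS=cvl_0_0_ns_spdk
ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for _ in $(seq 1 100); do               # waitforlisten uses max_retries=100
    [ -S /var/tmp/spdk.sock ] && break  # RPC UNIX socket appears once the app is up
    sleep 0.1
done
)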
00:30:06.205 [2024-12-09 06:29:00.592456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.592989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.593018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-12-09 06:29:00.593027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.205 [2024-12-09 06:29:00.593200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.205 [2024-12-09 06:29:00.593361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.205 [2024-12-09 06:29:00.593368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.205 [2024-12-09 06:29:00.593374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.205 [2024-12-09 06:29:00.593380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.205 [2024-12-09 06:29:00.605309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.605806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.605821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-12-09 06:29:00.605827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.205 [2024-12-09 06:29:00.605988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.205 [2024-12-09 06:29:00.606145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.205 [2024-12-09 06:29:00.606150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.205 [2024-12-09 06:29:00.606155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.205 [2024-12-09 06:29:00.606160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.205 [2024-12-09 06:29:00.618089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.205 [2024-12-09 06:29:00.618672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.205 [2024-12-09 06:29:00.618703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.205 [2024-12-09 06:29:00.618712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.205 [2024-12-09 06:29:00.618884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.205 [2024-12-09 06:29:00.619044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.205 [2024-12-09 06:29:00.619050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.205 [2024-12-09 06:29:00.619056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.205 [2024-12-09 06:29:00.619062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.205 [2024-12-09 06:29:00.620524] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:30:06.205 [2024-12-09 06:29:00.620575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:06.205 [2024-12-09 06:29:00.630860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.205 [2024-12-09 06:29:00.631431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.205 [2024-12-09 06:29:00.631467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.205 [2024-12-09 06:29:00.631476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.205 [2024-12-09 06:29:00.631651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.205 [2024-12-09 06:29:00.631810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.205 [2024-12-09 06:29:00.631817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.205 [2024-12-09 06:29:00.631823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.205 [2024-12-09 06:29:00.631830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.205 [2024-12-09 06:29:00.643603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.644167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.644198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-12-09 06:29:00.644211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.205 [2024-12-09 06:29:00.644384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.205 [2024-12-09 06:29:00.644550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.205 [2024-12-09 06:29:00.644558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.205 [2024-12-09 06:29:00.644563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.205 [2024-12-09 06:29:00.644570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.205 [2024-12-09 06:29:00.656363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.657035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.657065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-12-09 06:29:00.657074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.205 [2024-12-09 06:29:00.657247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.205 [2024-12-09 06:29:00.657407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.205 [2024-12-09 06:29:00.657413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.205 [2024-12-09 06:29:00.657419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.205 [2024-12-09 06:29:00.657425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.205 [2024-12-09 06:29:00.669189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.669668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.669684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-12-09 06:29:00.669690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.205 [2024-12-09 06:29:00.669847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.205 [2024-12-09 06:29:00.670003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.205 [2024-12-09 06:29:00.670009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.205 [2024-12-09 06:29:00.670015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.205 [2024-12-09 06:29:00.670020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.205 [2024-12-09 06:29:00.681928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.205 [2024-12-09 06:29:00.682431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-12-09 06:29:00.682444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.206 [2024-12-09 06:29:00.682454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.206 [2024-12-09 06:29:00.682611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.206 [2024-12-09 06:29:00.682771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.206 [2024-12-09 06:29:00.682778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.206 [2024-12-09 06:29:00.682783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.206 [2024-12-09 06:29:00.682788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.206 [2024-12-09 06:29:00.686660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:06.206 [2024-12-09 06:29:00.694696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.206 [2024-12-09 06:29:00.695048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.206 [2024-12-09 06:29:00.695062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.206 [2024-12-09 06:29:00.695068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.206 [2024-12-09 06:29:00.695226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.206 [2024-12-09 06:29:00.695382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.206 [2024-12-09 06:29:00.695388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.206 [2024-12-09 06:29:00.695394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.206 [2024-12-09 06:29:00.695398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.206 [2024-12-09 06:29:00.707454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.206 [2024-12-09 06:29:00.708040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.206 [2024-12-09 06:29:00.708072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.206 [2024-12-09 06:29:00.708081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.206 [2024-12-09 06:29:00.708257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.206 [2024-12-09 06:29:00.708417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.206 [2024-12-09 06:29:00.708423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.206 [2024-12-09 06:29:00.708429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.206 [2024-12-09 06:29:00.708436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.206 [2024-12-09 06:29:00.715993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:06.206 [2024-12-09 06:29:00.716018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:06.206 [2024-12-09 06:29:00.716025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:06.206 [2024-12-09 06:29:00.716030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:06.206 [2024-12-09 06:29:00.716035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
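(Annotation: the app_setup_trace notices above spell out how to pull the tracepoint data from this run; collected here as a runnable snippet, with the instance id, group name, and shared-memory path exactly as printed by the application:

# Capture a snapshot of the nvmf tracepoints from the running app (instance 0):
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
)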
00:30:06.206 [2024-12-09 06:29:00.717265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:06.206 [2024-12-09 06:29:00.717410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:06.206 [2024-12-09 06:29:00.717412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:06.206 [2024-12-09 06:29:00.720221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.206 [2024-12-09 06:29:00.720737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.206 [2024-12-09 06:29:00.720768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.206 [2024-12-09 06:29:00.720778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.206 [2024-12-09 06:29:00.720954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.206 [2024-12-09 06:29:00.721113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.206 [2024-12-09 06:29:00.721119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.206 [2024-12-09 06:29:00.721126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.206 [2024-12-09 06:29:00.721132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.206 [2024-12-09 06:29:00.733072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.206 [2024-12-09 06:29:00.733706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.206 [2024-12-09 06:29:00.733738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.206 [2024-12-09 06:29:00.733747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.206 [2024-12-09 06:29:00.733922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.206 [2024-12-09 06:29:00.734081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.206 [2024-12-09 06:29:00.734088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.206 [2024-12-09 06:29:00.734094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.206 [2024-12-09 06:29:00.734100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
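(Annotation: the three reactors land on cores 1, 2, and 3 because nvmfappstart passed -m 0xE, i.e. binary 1110 with core 0 excluded. A small bash check of that arithmetic, purely illustrative:

mask=0xE                       # from nvmfappstart -m 0xE above
printf 'mask %s -> cores:' "$mask"
for i in 0 1 2 3; do
    (( (mask >> i) & 1 )) && printf ' %d' "$i"   # bit i set => core i in use
done
echo    # prints: mask 0xE -> cores: 1 2 3
)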
00:30:06.206 [2024-12-09 06:29:00.745878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.206 [2024-12-09 06:29:00.746388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.206 [2024-12-09 06:29:00.746420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.206 [2024-12-09 06:29:00.746430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.206 [2024-12-09 06:29:00.746612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.206 [2024-12-09 06:29:00.746772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.206 [2024-12-09 06:29:00.746779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.206 [2024-12-09 06:29:00.746786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.206 [2024-12-09 06:29:00.746792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.206 [2024-12-09 06:29:00.758730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.206 [2024-12-09 06:29:00.759309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.206 [2024-12-09 06:29:00.759341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.206 [2024-12-09 06:29:00.759359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.206 [2024-12-09 06:29:00.759538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.206 [2024-12-09 06:29:00.759698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.206 [2024-12-09 06:29:00.759705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.206 [2024-12-09 06:29:00.759710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.206 [2024-12-09 06:29:00.759717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.206 [2024-12-09 06:29:00.771487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.206 [2024-12-09 06:29:00.771958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.206 [2024-12-09 06:29:00.771973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.206 [2024-12-09 06:29:00.771979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.206 [2024-12-09 06:29:00.772136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.206 [2024-12-09 06:29:00.772292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.206 [2024-12-09 06:29:00.772298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.206 [2024-12-09 06:29:00.772304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.206 [2024-12-09 06:29:00.772309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.206 [2024-12-09 06:29:00.784215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.206 [2024-12-09 06:29:00.784680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.206 [2024-12-09 06:29:00.784710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.206 [2024-12-09 06:29:00.784719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.206 [2024-12-09 06:29:00.784895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.206 [2024-12-09 06:29:00.785054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.206 [2024-12-09 06:29:00.785061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.206 [2024-12-09 06:29:00.785066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.207 [2024-12-09 06:29:00.785072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.468 [2024-12-09 06:29:00.796989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.468 [2024-12-09 06:29:00.797476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.468 [2024-12-09 06:29:00.797492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.468 [2024-12-09 06:29:00.797498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.468 [2024-12-09 06:29:00.797654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.468 [2024-12-09 06:29:00.797815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.468 [2024-12-09 06:29:00.797821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.468 [2024-12-09 06:29:00.797826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.468 [2024-12-09 06:29:00.797831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.468 [2024-12-09 06:29:00.809746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.468 [2024-12-09 06:29:00.810101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.468 [2024-12-09 06:29:00.810114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.468 [2024-12-09 06:29:00.810120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.468 [2024-12-09 06:29:00.810276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.468 [2024-12-09 06:29:00.810433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.468 [2024-12-09 06:29:00.810439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.468 [2024-12-09 06:29:00.810444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.468 [2024-12-09 06:29:00.810454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.468 [2024-12-09 06:29:00.822520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.468 [2024-12-09 06:29:00.822991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.468 [2024-12-09 06:29:00.823004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.468 [2024-12-09 06:29:00.823009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.468 [2024-12-09 06:29:00.823165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.468 [2024-12-09 06:29:00.823321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.468 [2024-12-09 06:29:00.823328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.468 [2024-12-09 06:29:00.823333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.468 [2024-12-09 06:29:00.823338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.468 [2024-12-09 06:29:00.835271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.468 [2024-12-09 06:29:00.835741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.468 [2024-12-09 06:29:00.835756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.468 [2024-12-09 06:29:00.835762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.468 [2024-12-09 06:29:00.835922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.468 [2024-12-09 06:29:00.836078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.468 [2024-12-09 06:29:00.836084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.468 [2024-12-09 06:29:00.836089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.468 [2024-12-09 06:29:00.836094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.468 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.468 [2024-12-09 06:29:00.848017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.468 [2024-12-09 06:29:00.848509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.468 [2024-12-09 06:29:00.848523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.468 [2024-12-09 06:29:00.848528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.468 [2024-12-09 06:29:00.848685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.468 [2024-12-09 06:29:00.848841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.468 [2024-12-09 06:29:00.848847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.468 [2024-12-09 06:29:00.848852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.468 [2024-12-09 06:29:00.848857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.469 [2024-12-09 06:29:00.852100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.469 [2024-12-09 06:29:00.860768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.469 [2024-12-09 06:29:00.861257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.469 [2024-12-09 06:29:00.861270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.469 [2024-12-09 06:29:00.861275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.469 [2024-12-09 06:29:00.861431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.469 [2024-12-09 06:29:00.861592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.469 [2024-12-09 06:29:00.861599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.469 [2024-12-09 06:29:00.861604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.469 [2024-12-09 06:29:00.861609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.469 [2024-12-09 06:29:00.873514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.469 [2024-12-09 06:29:00.874115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.469 [2024-12-09 06:29:00.874145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420
00:30:06.469 [2024-12-09 06:29:00.874154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set
00:30:06.469 [2024-12-09 06:29:00.874327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor
00:30:06.469 [2024-12-09 06:29:00.874492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.469 [2024-12-09 06:29:00.874499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.469 [2024-12-09 06:29:00.874505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.469 [2024-12-09 06:29:00.874511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
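(Annotation: the rpc_cmd invocations above map one-to-one onto scripts/rpc.py calls against the new target's RPC socket. A hedged sketch — the rpc.py path and explicit -s socket argument are assumptions; the commands and their arguments are exactly as logged:

# Create the TCP transport (-o and -u 8192 as passed by the harness) and a
# 64 MiB malloc bdev with 512-byte blocks named Malloc0.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
)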
00:30:06.469 [2024-12-09 06:29:00.886276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.469 [2024-12-09 06:29:00.886664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.469 [2024-12-09 06:29:00.886680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.469 [2024-12-09 06:29:00.886687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.469 [2024-12-09 06:29:00.886844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.469 [2024-12-09 06:29:00.887001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.469 [2024-12-09 06:29:00.887006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.469 [2024-12-09 06:29:00.887012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.469 [2024-12-09 06:29:00.887018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:06.469 Malloc0 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.469 [2024-12-09 06:29:00.899077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.469 [2024-12-09 06:29:00.899671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.469 [2024-12-09 06:29:00.899702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146d6e0 with addr=10.0.0.2, port=4420 00:30:06.469 [2024-12-09 06:29:00.899711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d6e0 is same with the state(6) to be set 00:30:06.469 [2024-12-09 06:29:00.899887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d6e0 (9): Bad file descriptor 00:30:06.469 [2024-12-09 06:29:00.900050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.469 [2024-12-09 06:29:00.900057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.469 [2024-12-09 06:29:00.900063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
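
Every connect() failure in the retry blocks above carries errno = 111, which on Linux is ECONNREFUSED: at that moment nothing is accepting on 10.0.0.2:4420, so each bdev_nvme reset attempt fails until the listener comes back. A quick way to confirm the mapping and to probe the port the same way the qpair does (a sketch; assumes a Linux host with netcat installed):

  # errno 111 in the kernel's errno table
  grep -w 111 /usr/include/asm-generic/errno.h
  #   #define ECONNREFUSED    111     /* Connection refused */

  # probe the target listener; -z only tests connectability, -w1 caps the wait at 1s
  nc -z -w1 10.0.0.2 4420 && echo 'listener up' || echo 'refused or timed out'
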
00:30:06.469 [2024-12-09 06:29:00.900069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.469 [2024-12-09 06:29:00.906949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.469 [2024-12-09 06:29:00.911850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.469 06:29:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 497736
00:30:06.469 [2024-12-09 06:29:00.937917] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:30:07.853 5049.50 IOPS, 19.72 MiB/s
[2024-12-09T05:29:03.379Z] 6118.71 IOPS, 23.90 MiB/s
[2024-12-09T05:29:04.319Z] 6935.38 IOPS, 27.09 MiB/s
[2024-12-09T05:29:05.258Z] 7582.33 IOPS, 29.62 MiB/s
[2024-12-09T05:29:06.198Z] 8073.40 IOPS, 31.54 MiB/s
[2024-12-09T05:29:07.138Z] 8473.82 IOPS, 33.10 MiB/s
[2024-12-09T05:29:08.077Z] 8817.83 IOPS, 34.44 MiB/s
[2024-12-09T05:29:09.455Z] 9112.69 IOPS, 35.60 MiB/s
[2024-12-09T05:29:10.393Z] 9364.57 IOPS, 36.58 MiB/s
[2024-12-09T05:29:10.393Z] 9579.40 IOPS, 37.42 MiB/s
00:30:15.806                                                        Latency(us)
00:30:15.806 [2024-12-09T05:29:10.393Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:15.806 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:15.806 Verification LBA range: start 0x0 length 0x4000
00:30:15.806 Nvme1n1                                  :      15.01    9577.21      37.41   10394.37       0.00    6389.10     570.29   20971.52
00:30:15.806 [2024-12-09T05:29:10.393Z] ===================================================================================================================
00:30:15.806 [2024-12-09T05:29:10.393Z] Total                       :               9577.21      37.41   10394.37       0.00    6389.10     570.29   20971.52
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
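
For reference, the target-side setup that bdevperf.sh drove above through rpc_cmd (TCP transport, Malloc0 bdev, subsystem, namespace, listener) can be reproduced by hand with SPDK's scripts/rpc.py. A sketch, assuming a local nvmf_tgt on the default /var/tmp/spdk.sock socket; the arguments are copied verbatim from the traced commands:

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'    # assumed RPC socket path
  $RPC nvmf_create_transport -t tcp -o -u 8192  # TCP transport, 8192-byte IO unit
  $RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up (the '*** NVMe/TCP Target Listening ***' notice above), the resets stop failing and bdevperf's IOPS ramp begins.
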
00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.806 rmmod nvme_tcp 00:30:15.806 rmmod nvme_fabrics 00:30:15.806 rmmod nvme_keyring 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 498709 ']' 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 498709 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 498709 ']' 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 498709 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 498709 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 498709' 00:30:15.806 killing process with pid 498709 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 498709 00:30:15.806 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 498709 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.066 06:29:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.975 06:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.975 00:30:17.975 real 0m28.066s 00:30:17.975 user 1m3.101s 00:30:17.975 sys 0m7.517s 00:30:17.975 06:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
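
The teardown above walks autotest_common.sh's killprocess helper: validate the pid, probe liveness with kill -0, check the process name via ps, then kill and reap. Reconstructed from the traced steps (a sketch, not the verbatim SPDK function):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1          # '[' -z 498709 ']' in the trace
      kill -0 "$pid" || return 1         # signal 0: liveness probe only
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here
      fi
      if [ "$process_name" = sudo ]; then
          kill -9 "$pid"                 # assumed branch for sudo-wrapped targets
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid" || true                # reap; a nonzero exit is expected here
  }
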
00:30:17.975 06:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.975 ************************************ 00:30:17.975 END TEST nvmf_bdevperf 00:30:17.975 ************************************ 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.236 ************************************ 00:30:18.236 START TEST nvmf_target_disconnect 00:30:18.236 ************************************ 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:18.236 * Looking for test storage... 00:30:18.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.236 --rc genhtml_branch_coverage=1 00:30:18.236 --rc genhtml_function_coverage=1 00:30:18.236 --rc genhtml_legend=1 00:30:18.236 --rc geninfo_all_blocks=1 00:30:18.236 --rc geninfo_unexecuted_blocks=1 00:30:18.236 00:30:18.236 ' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.236 --rc genhtml_branch_coverage=1 00:30:18.236 --rc genhtml_function_coverage=1 00:30:18.236 --rc genhtml_legend=1 00:30:18.236 --rc geninfo_all_blocks=1 00:30:18.236 --rc geninfo_unexecuted_blocks=1 00:30:18.236 00:30:18.236 ' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.236 --rc genhtml_branch_coverage=1 00:30:18.236 --rc genhtml_function_coverage=1 00:30:18.236 --rc genhtml_legend=1 00:30:18.236 --rc geninfo_all_blocks=1 00:30:18.236 --rc geninfo_unexecuted_blocks=1 00:30:18.236 00:30:18.236 ' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.236 --rc genhtml_branch_coverage=1 00:30:18.236 --rc genhtml_function_coverage=1 00:30:18.236 --rc genhtml_legend=1 00:30:18.236 --rc geninfo_all_blocks=1 00:30:18.236 --rc geninfo_unexecuted_blocks=1 00:30:18.236 00:30:18.236 ' 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.236 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.237 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.497 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.498 06:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:26.632 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:26.632 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:26.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:26.633 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:26.633 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
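
The NIC scan traced above maps each supported PCI function to its kernel net device by globbing sysfs: for every address kept in pci_devs it lists /sys/bus/pci/devices/$pci/net/ and strips the paths down to interface names (cvl_0_0 and cvl_0_1 in this run). Condensed into a standalone sketch, with the device-ID tables trimmed to the two 0x159b (ice/e810) functions actually found:

  pci_devs=(0000:4b:00.0 0000:4b:00.1)    # the two 0x8086:0x159b functions above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs glob, as in nvmf/common.sh
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifnames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
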
00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.633 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.634 06:29:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:30:26.634 00:30:26.634 --- 10.0.0.2 ping statistics --- 00:30:26.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.634 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:30:26.634 00:30:26.634 --- 10.0.0.1 ping statistics --- 00:30:26.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.634 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:26.634 ************************************ 00:30:26.634 START TEST nvmf_target_disconnect_tc1 00:30:26.634 ************************************ 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:26.634 06:29:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.634 [2024-12-09 06:29:20.327743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.634 [2024-12-09 06:29:20.327813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d9570 with addr=10.0.0.2, port=4420 00:30:26.634 [2024-12-09 06:29:20.327842] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:26.634 [2024-12-09 06:29:20.327854] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:26.634 [2024-12-09 06:29:20.327862] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:26.634 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:26.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:26.634 Initializing NVMe Controllers 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:26.634 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:26.634 00:30:26.634 real 0m0.135s 00:30:26.634 user 0m0.056s 00:30:26.634 sys 0m0.078s 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:26.635 ************************************ 00:30:26.635 END TEST nvmf_target_disconnect_tc1 00:30:26.635 ************************************ 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:26.635 ************************************ 00:30:26.635 START TEST nvmf_target_disconnect_tc2 00:30:26.635 ************************************ 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=504357 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 504357 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 504357 ']' 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.635 06:29:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:26.635 [2024-12-09 06:29:20.467081] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:30:26.635 [2024-12-09 06:29:20.467138] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.635 [2024-12-09 06:29:20.545724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.635 [2024-12-09 06:29:20.597544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.635 [2024-12-09 06:29:20.597599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:26.635 [2024-12-09 06:29:20.597607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.635 [2024-12-09 06:29:20.597613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.635 [2024-12-09 06:29:20.597620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.635 [2024-12-09 06:29:20.599645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:26.635 [2024-12-09 06:29:20.599885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:26.635 [2024-12-09 06:29:20.600038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:26.635 [2024-12-09 06:29:20.600040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 Malloc0 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 [2024-12-09 06:29:21.374962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 06:29:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 [2024-12-09 06:29:21.415343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=504507 00:30:26.897 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:26.898 06:29:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.500 06:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 504357 00:30:29.500 06:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error 
(sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Read completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 Write completed with error (sct=0, sc=8) 00:30:29.500 starting I/O failed 00:30:29.500 [2024-12-09 06:29:23.449844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.500 [2024-12-09 06:29:23.450248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.500 [2024-12-09 06:29:23.450270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.500 qpair failed and we were unable to recover it. 00:30:29.500 [2024-12-09 06:29:23.450509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.500 [2024-12-09 06:29:23.450536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.500 qpair failed and we were unable to recover it. 00:30:29.500 [2024-12-09 06:29:23.450959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.500 [2024-12-09 06:29:23.451002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.500 qpair failed and we were unable to recover it. 
00:30:29.500 [2024-12-09 06:29:23.451318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.500 [2024-12-09 06:29:23.451335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.500 qpair failed and we were unable to recover it.
00:30:29.506 [... the identical three-line sequence -- posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it -- repeats for every further reconnect attempt, from 2024-12-09 06:29:23.451828 through 06:29:23.510403 ...]
00:30:29.506 [2024-12-09 06:29:23.510704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.510730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.511040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.511064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.511397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.511429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.511786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.511820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.512165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.512196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.512508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.512541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.512860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.512892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.513210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.513240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.513444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.513497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.513814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.513845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 
00:30:29.506 [2024-12-09 06:29:23.514059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.514103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.514353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.514383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.514751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.514784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.515024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.515057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.515447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.515488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.515823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.515854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.516177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.516210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.516551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.516582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.516919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.516951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.517277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.517308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 
00:30:29.506 [2024-12-09 06:29:23.517657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.517690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.517900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.517934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.518252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.518284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.518597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.518630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.518985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.519018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.519335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.519366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.519685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.519717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.520031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.520062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.520286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.520318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.520654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.520687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 
00:30:29.506 [2024-12-09 06:29:23.521026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.521057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.521414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.521445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.506 [2024-12-09 06:29:23.521839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.506 [2024-12-09 06:29:23.521871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.506 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.522184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.522215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.522547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.522578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.522893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.522922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.523254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.523286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.523686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.523718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.523935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.523969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.524346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.524378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 
00:30:29.507 [2024-12-09 06:29:23.524700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.524734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.525065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.525096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.525435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.525477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.525832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.525864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.526075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.526108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.526467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.526498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.526886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.526917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.527229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.527262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.527590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.527621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.527941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.527971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 
00:30:29.507 [2024-12-09 06:29:23.528289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.528320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.528580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.528618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.528954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.528985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.529299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.529329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.529676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.529708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.530126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.530157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.530494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.530526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.530847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.530877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.531213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.531245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.531578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.531610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 
00:30:29.507 [2024-12-09 06:29:23.531971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.532002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.532341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.532373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.532693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.532724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.533053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.533083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.533412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.533445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.533800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.533833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.534090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.534378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.534411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.534769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.534800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 00:30:29.507 [2024-12-09 06:29:23.535123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.535156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.507 qpair failed and we were unable to recover it. 
00:30:29.507 [2024-12-09 06:29:23.535489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.507 [2024-12-09 06:29:23.535521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.535839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.535869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.536185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.536215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.536553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.536585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.536920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.536950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.537279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.537310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.537655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.537687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.538015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.538048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.538372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.538410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.538771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.538804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 
00:30:29.508 [2024-12-09 06:29:23.539118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.539150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.539478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.539509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.539743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.539774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.540085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.540115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.540467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.540498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.540826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.540860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.541187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.541217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.541553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.541586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.541980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.542012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.542336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.542369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 
00:30:29.508 [2024-12-09 06:29:23.542680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.542712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.543028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.543059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.543366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.543398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.543766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.543798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.544111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.544141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.544367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.544402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.544798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.544830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.545141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.545171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.545494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.545526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.545832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.545864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 
00:30:29.508 [2024-12-09 06:29:23.546184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.546214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.546531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.546563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.546895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.546928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.547261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.547292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.547594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.547624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.547968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.547998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.548331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.548365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.548676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.508 [2024-12-09 06:29:23.548707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.508 qpair failed and we were unable to recover it. 00:30:29.508 [2024-12-09 06:29:23.549041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.549072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.549442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 
00:30:29.509 [2024-12-09 06:29:23.549852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.549883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.550268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.550299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.550627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.550658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.550986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.551016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.551340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.551371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.551701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.551733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.552057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.552091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.552408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.552439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.552802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.552835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.553179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.553218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 
00:30:29.509 [2024-12-09 06:29:23.553553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.553585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.553945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.554266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.554296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.554607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.554638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.554961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.554993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.555307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.555338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.555677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.555709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.556036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.556067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.556392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.556423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 00:30:29.509 [2024-12-09 06:29:23.556787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.509 [2024-12-09 06:29:23.556817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.509 qpair failed and we were unable to recover it. 
00:30:29.509 [2024-12-09 06:29:23.557155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.509 [2024-12-09 06:29:23.557188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.509 qpair failed and we were unable to recover it.
[... the same three-line triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats roughly 200 more times, identical except for advancing timestamps, between 06:29:23.557 and 06:29:23.634; duplicates elided ...]
00:30:29.515 [2024-12-09 06:29:23.633542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.515 [2024-12-09 06:29:23.633571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.515 qpair failed and we were unable to recover it.
00:30:29.515 [2024-12-09 06:29:23.633908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.633937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.634280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.634309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.634659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.634688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.635018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.635046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.635391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.635421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.635752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.635784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.636016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.636047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.636400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.636430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.636808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.636840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.637189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.637221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 
00:30:29.515 [2024-12-09 06:29:23.637552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.637584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.637908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.637939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.638264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.638296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.638604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.638637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.638965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.638997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.639318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.639350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.639657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.639690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.640035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.640066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.640386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.640419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.640808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.640840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 
00:30:29.515 [2024-12-09 06:29:23.641056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.641091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.641407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.641442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.641781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.641814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.642039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.642071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.642383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.642423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.515 qpair failed and we were unable to recover it. 00:30:29.515 [2024-12-09 06:29:23.642835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.515 [2024-12-09 06:29:23.642868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.643215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.643247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.643570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.643604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.643909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.643941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.644296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 
00:30:29.516 [2024-12-09 06:29:23.644621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.644656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.645016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.645049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.645357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.645391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.645745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.645778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.646152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.646187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.646326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.646362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.646721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.646757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.647106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.647138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.647483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.647517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.647857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.647890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 
00:30:29.516 [2024-12-09 06:29:23.648211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.648243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.648570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.648604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.648854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.648888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.649237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.649270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.649595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.649628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.649859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.649896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.650240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.650272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.650603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.650636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.651012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.651044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.651349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.651380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 
00:30:29.516 [2024-12-09 06:29:23.651720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.651757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.652106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.652143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.652471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.652507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.652735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.652771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.653112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.653145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.653494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.653527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.653913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.653944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.654260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.654290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.654635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.654666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.655012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.655043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 
00:30:29.516 [2024-12-09 06:29:23.655386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.655417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.655776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.655810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.656187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.516 [2024-12-09 06:29:23.656220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.516 qpair failed and we were unable to recover it. 00:30:29.516 [2024-12-09 06:29:23.656570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.656607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.656922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.656953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.657239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.657271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.657479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.657510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.657765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.657796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.658132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.658165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.658292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.658323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 
00:30:29.517 [2024-12-09 06:29:23.658567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.658601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.658989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.659020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.659337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.659368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.659676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.659707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.660025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.660056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.660382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.660417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.660818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.660851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.661196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.661229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.661574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.661607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.661985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.662017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 
00:30:29.517 [2024-12-09 06:29:23.662340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.662374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.662727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.662762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.663146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.663178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.663544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.663580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.663928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.663960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.664301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.664333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.664639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.664672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.664931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.664967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.665201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.665235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.665557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.665590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 
00:30:29.517 [2024-12-09 06:29:23.665944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.665977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.666294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.666326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.666571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.666609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.666935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.666967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.667308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.667340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.667615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.667647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.667864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.667894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.668118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.668149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.668490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.668522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.668935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.668968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 
00:30:29.517 [2024-12-09 06:29:23.669308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.517 [2024-12-09 06:29:23.669341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.517 qpair failed and we were unable to recover it. 00:30:29.517 [2024-12-09 06:29:23.669736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.669769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.670150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.670181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.670548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.670582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.670898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.670929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.671294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.671328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.671683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.671716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.672033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.672065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.672403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.672435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.672761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.672794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 
00:30:29.518 [2024-12-09 06:29:23.673128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.673161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.673493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.673526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.673903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.673936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.674263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.674295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.674624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.674656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.675003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.675034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.675365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.675397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.675792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.675825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.676141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.676172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.676508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.676539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 
00:30:29.518 [2024-12-09 06:29:23.676809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.676840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.677219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.677251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.677494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.677528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.677768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.677800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.678169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.678199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.678520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.678552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.678895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.678927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.679155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.679188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.679438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.679488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 00:30:29.518 [2024-12-09 06:29:23.679625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.518 [2024-12-09 06:29:23.679659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.518 qpair failed and we were unable to recover it. 
00:30:29.518 [2024-12-09 06:29:23.680003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.518 [2024-12-09 06:29:23.680035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.518 qpair failed and we were unable to recover it.
00:30:29.524 [... the three-line failure sequence above repeats with only the timestamps advancing, about 210 times, from 06:29:23.680003 through the final occurrence below ...]
00:30:29.524 [2024-12-09 06:29:23.758410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.524 [2024-12-09 06:29:23.758473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.524 qpair failed and we were unable to recover it.
00:30:29.524 [2024-12-09 06:29:23.758890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.758937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.759318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.759365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.759766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.759812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.760179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.760224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.760638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.760685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.761063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.761111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.761504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.761555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.761987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.762036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.762414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.762477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.762775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.762826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 
00:30:29.524 [2024-12-09 06:29:23.763127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.763180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.763609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.763658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.764030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.764077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.764508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.764564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.764946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.764984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.765329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.765363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.765746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.765780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.766026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.524 [2024-12-09 06:29:23.766057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.524 qpair failed and we were unable to recover it. 00:30:29.524 [2024-12-09 06:29:23.766413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.766445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.766831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.766866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 
00:30:29.525 [2024-12-09 06:29:23.767216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.767248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.767592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.767620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.767969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.767996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.768345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.768372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.768713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.768748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.768974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.769000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.769175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.769202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.769539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.769565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.769801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.769828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.770157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.770183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 
00:30:29.525 [2024-12-09 06:29:23.770528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.770554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.770807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.770832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.771163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.771190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.771557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.771586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.771918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.771947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.772294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.772322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.772545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.772572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.772910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.772938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.773209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.773236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.773580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.773608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 
00:30:29.525 [2024-12-09 06:29:23.773944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.773969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.774319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.774347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.774634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.774661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.775003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.775031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.775384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.775412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.775777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.775803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.776145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.776173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.776502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.776529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.776842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.777225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 
00:30:29.525 [2024-12-09 06:29:23.777595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.777623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.777975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.778001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.778231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.778257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.778480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.778512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.778766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.778796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.779021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.525 [2024-12-09 06:29:23.779055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.525 qpair failed and we were unable to recover it. 00:30:29.525 [2024-12-09 06:29:23.779384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.779417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.779800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.779832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.780096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.780127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.780480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.780513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 
00:30:29.526 [2024-12-09 06:29:23.780883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.780915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.781249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.781282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.781626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.781659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.782008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.782040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.782362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.782393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.782767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.782810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.783198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.783229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.783568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.783603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.783917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.783948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.784278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.784310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 
00:30:29.526 [2024-12-09 06:29:23.784690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.784723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.785058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.785091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.785404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.785436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.785835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.785867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.786209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.786241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.786629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.786663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.786989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.787022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.787354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.787385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.787710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.787745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.788101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.788134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 
00:30:29.526 [2024-12-09 06:29:23.788475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.788509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.788765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.788800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.789206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.789238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.789575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.789610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.789953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.789985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.790330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.790361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.790708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.790741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.791077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.791110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.791441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.791505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.791882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.791915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 
00:30:29.526 [2024-12-09 06:29:23.792285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.792316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.792661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.792693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.792917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.792958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.793313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.793345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.526 [2024-12-09 06:29:23.793737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.526 [2024-12-09 06:29:23.793771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.526 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.794180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.794212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.794559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.794591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.794972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.795003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.795338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.795371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.795700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.795732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 
00:30:29.527 [2024-12-09 06:29:23.796105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.796137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.796471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.796503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.796887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.796919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.797244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.797276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.797502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.797535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.797873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.797903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.798273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.798304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.798654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.798687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.799046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.799077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.799432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.799475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 
00:30:29.527 [2024-12-09 06:29:23.799820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.799853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.800100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.800131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.800495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.800527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.800877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.800910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.801270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.801302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.801659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.801693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.802056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.802088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.802431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.802473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.802820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.802853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.803176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.803206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 
00:30:29.527 [2024-12-09 06:29:23.803554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.803588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.803929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.803962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.804307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.804339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.804691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.804722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.805081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.805112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.805471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.805506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.805878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.805911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.806236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.806268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.806617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.806649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 00:30:29.527 [2024-12-09 06:29:23.806995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.527 [2024-12-09 06:29:23.807026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.527 qpair failed and we were unable to recover it. 
00:30:29.527 [2024-12-09 06:29:23.807409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:29.527 [2024-12-09 06:29:23.807441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 
00:30:29.527 qpair failed and we were unable to recover it. 
[... the same error pair — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 06:29:23.807 through 06:29:23.887 ...]
00:30:29.533 [2024-12-09 06:29:23.887390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.887422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.887778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.887812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.888221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.888256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.888595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.888628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.888997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.889030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.889350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.889382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.889717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.889751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.890023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.890055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.890391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.890423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.890778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.890811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 
00:30:29.533 [2024-12-09 06:29:23.891071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.891101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.533 qpair failed and we were unable to recover it. 00:30:29.533 [2024-12-09 06:29:23.891477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.533 [2024-12-09 06:29:23.891510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.891887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.891920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.892266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.892296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.892651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.892683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.893031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.893063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.893398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.893429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.893856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.893888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.894229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.894262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.894603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.894636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 
00:30:29.534 [2024-12-09 06:29:23.894990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.895022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.895346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.895377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.895718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.896100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.896133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.896474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.896506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.896883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.896917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.897288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.897322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.897640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.898024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.898056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.901493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.901558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 
00:30:29.534 [2024-12-09 06:29:23.901968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.902004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.902362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.902394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.902719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.902752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.903072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.903102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.903475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.903508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.903889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.903924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.904305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.904337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.904613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.904645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.905041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.905082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.905485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 
00:30:29.534 [2024-12-09 06:29:23.905746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.905778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.906156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.906188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.906533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.906568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.906973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.907006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.907374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.907406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.907821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.908174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.908207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.534 [2024-12-09 06:29:23.908581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.534 [2024-12-09 06:29:23.908613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.534 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.908968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.908999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.909407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.909441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 
00:30:29.535 [2024-12-09 06:29:23.909836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.909871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.910220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.910265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.910580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.910620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.910981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.911020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.913600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.913653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.914020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.914051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.914389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.914413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.914771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.914795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.915134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.915158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.915506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.915532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 
00:30:29.535 [2024-12-09 06:29:23.915876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.915898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.916108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.916135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.916492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.917984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.918033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.918435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.918472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.918825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.918856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.919191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.919212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.919560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.919584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.919905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.919927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.920264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.920289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 
00:30:29.535 [2024-12-09 06:29:23.920608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.920632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.920967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.920990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.921342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.921364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.921619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.921640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.921996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.922019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.922360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.922386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.922742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.922766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.923075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.923097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.923326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.923351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.923629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.923652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 
00:30:29.535 [2024-12-09 06:29:23.923970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.923992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.924348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.924370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.535 [2024-12-09 06:29:23.924716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.535 [2024-12-09 06:29:23.924740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.535 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.925085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.925114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.925442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.925487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.926939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.926993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.927408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.927441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.929078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.929133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.929425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.929467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.929824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.929853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 
00:30:29.536 [2024-12-09 06:29:23.930180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.930211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.930522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.930551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.930871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.930899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.931122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.931153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.931523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.931555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.933701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.933767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.934141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.934175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.934506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.934536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.934770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.934802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.935150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.935179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 
00:30:29.536 [2024-12-09 06:29:23.935517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.935547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.935880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.935907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.936255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.936286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.936622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.936653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.937011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.937039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.937470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.937499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.937831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.937868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.938202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.938232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.938568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.938600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.938941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.938970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 
00:30:29.536 [2024-12-09 06:29:23.939246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.939273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.939613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.939645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.939992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.940023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.940374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.940405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.940748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.940778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.941110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.941140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.941479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.941509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.941850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.941878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.942214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.942244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 00:30:29.536 [2024-12-09 06:29:23.942575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.942606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.536 qpair failed and we were unable to recover it. 
00:30:29.536 [2024-12-09 06:29:23.942958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.536 [2024-12-09 06:29:23.942988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.943333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.943365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.943713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.943745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.944076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.944106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.944444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.944483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.944861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.944892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.945225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.945253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.946899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.946958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.947328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.947364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 00:30:29.537 [2024-12-09 06:29:23.947710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.537 [2024-12-09 06:29:23.947745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.537 qpair failed and we were unable to recover it. 
00:30:29.537 [2024-12-09 06:29:23.948127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.948158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.948519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.948556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.948781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.948814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.949188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.949221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.949611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.949645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.949870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.949900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.952111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.952178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.952552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.952588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.952974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.953006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.953380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.953413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.953804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.953837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.954189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.954222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.954564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.954599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.954935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.954971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.955305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.955335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.955673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.955708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.955976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.956011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.537 [2024-12-09 06:29:23.956383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.537 [2024-12-09 06:29:23.956413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.537 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.956820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.956855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.957236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.957269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.957549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.957582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.957938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.957970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.958198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.958229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.958585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.958617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.958849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.959235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.959268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.959513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.959544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.959897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.959927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.960271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.960302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.960539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.960571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.960934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.960966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.961331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.961366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.961710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.961746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.962128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.962159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.962443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.962488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.962841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.962875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.963260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.963295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.963665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.963698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.964057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.964089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.964333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.964364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.964755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.964788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.965131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.965163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.965497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.965529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.965878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.965908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.966252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.966289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.966558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.966589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.966952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.538 [2024-12-09 06:29:23.966983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.538 qpair failed and we were unable to recover it.
00:30:29.538 [2024-12-09 06:29:23.967328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.967364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.967719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.967752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.968094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.968127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.968490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.968523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.968886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.968916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.969253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.969285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.969675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.969708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.969947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.969978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.970325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.970358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.970701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.970733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.971109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.971141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.971496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.971528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.971860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.971890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.972113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.972146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.972518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.972552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.972902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.972934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.973267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.973299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.973619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.973652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.973884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.973916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.974266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.974300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.974711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.974745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.975128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.975159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.975507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.975539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.975882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.975913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.976250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.976286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.976698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.976731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.977090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.977120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.977477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.977510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.977882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.977912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.978264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.978295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.978656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.539 [2024-12-09 06:29:23.978687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.539 qpair failed and we were unable to recover it.
00:30:29.539 [2024-12-09 06:29:23.979002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.979036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.979374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.979405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.979800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.979833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.980182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.980214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.980563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.980596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.980710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.980738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.981102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.981132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.981479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.981514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.981894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.981925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.982259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.982289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.982658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.982690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.983036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.983068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.983416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.983462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.983840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.983871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.984203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.984234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.984479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.984513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.984738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.984770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.985110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.985143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.985394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.985427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.985800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.985836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.986180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.986212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.986557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.986588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.986936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.986969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.987370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.987402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.987628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.987660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.987980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.988015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.988381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.988413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.988775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.988807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.989148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.989178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.540 [2024-12-09 06:29:23.989522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.540 [2024-12-09 06:29:23.989554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.540 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.989906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.989938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.990286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.990319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.990654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.990688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.990893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.990928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.991270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.991309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.991633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.991667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.991990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.992021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.992372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.992406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.992785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.992818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.993261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.993294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.993651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.993685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.994049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.994079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.994419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.994484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.994827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.994863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.995112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.995145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.995494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.995527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.995769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.995803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.996147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.996178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.996530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.996563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.996924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.996956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.997301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.997333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.997685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.997716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.998065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.998097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.998464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.998494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.998853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.998883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.999227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.999262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.999582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:23.999616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:23.999977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:24.000009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:24.000348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:24.000380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:24.000604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:24.000639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:24.000997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:24.001029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:24.001395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.541 [2024-12-09 06:29:24.001430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.541 qpair failed and we were unable to recover it.
00:30:29.541 [2024-12-09 06:29:24.001822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.001855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.002225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.002257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.002621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.002653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.003029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.003061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.003405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.003439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.003874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.003907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.004306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.004339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.004664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.004697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.005038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.005071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.005387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.005419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.005811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.005845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.006187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.006221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.006465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.006498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.006835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.006874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.007146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.007178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.007536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.007569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.007924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.007957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.008285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.008319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.008677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.008712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.009042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.009075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.009410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.009442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.009870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.009904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.010272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.010306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.010665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.010698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.011056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.011087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.011428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.011473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.011843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.011876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.012244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.012275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.014509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.014577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.014999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.015034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.016778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.016838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.017214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.017249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.017628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.017662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.017993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.018027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.018430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.018475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.018833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.018866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.019211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.019243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.542 [2024-12-09 06:29:24.019600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.542 [2024-12-09 06:29:24.019634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.542 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.019859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.019890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.020290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.020322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.020666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.020707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.020975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.021008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.021351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.021384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.021721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.021756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.022084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.022526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.022559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.022734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.022765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.023129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.023161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.023507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.023542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.023914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.023947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.024276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.024309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.024675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.024708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.025077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.025108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.025434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.025483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.025884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.025917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.026199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.026230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.026574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.026606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.026963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.026995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.027447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.027490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.027752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.027783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.028008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.028041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.028383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.028416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.028822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.028859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.029265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.029297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.029730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.029763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.030137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.030170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.030536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.030568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.030923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.030955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.031380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.031412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.031826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.543 [2024-12-09 06:29:24.031860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.543 qpair failed and we were unable to recover it.
00:30:29.543 [2024-12-09 06:29:24.032190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.543 [2024-12-09 06:29:24.032224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.543 qpair failed and we were unable to recover it. 00:30:29.543 [2024-12-09 06:29:24.032621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.543 [2024-12-09 06:29:24.032654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.543 qpair failed and we were unable to recover it. 00:30:29.543 [2024-12-09 06:29:24.032998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.543 [2024-12-09 06:29:24.033031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.543 qpair failed and we were unable to recover it. 00:30:29.543 [2024-12-09 06:29:24.033359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.543 [2024-12-09 06:29:24.033390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.543 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.033704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.033737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.034094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.034125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.034474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.034509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.034846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.034877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.035252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.035285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.035738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.035771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 
00:30:29.544 [2024-12-09 06:29:24.036156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.036187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.036522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.036563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.036935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.036966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.037310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.037341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.037517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.037547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.037900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.037932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.038166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.038195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.038480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.038512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.038865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.038898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.039124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.039155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 
00:30:29.544 [2024-12-09 06:29:24.039549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.039583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.039822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.039857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.040224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.040255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.040596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.040628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.041013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.041045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.041417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.041478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.041844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.041876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.042242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.042275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.042644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.042677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.043011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.043044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 
00:30:29.544 [2024-12-09 06:29:24.043370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.043402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.043819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.043852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.044258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.044289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.044528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.044560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.044911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.044943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.045291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.045323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.045672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.045703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.045974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.046007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.046243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.046279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.046663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.046695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 
00:30:29.544 [2024-12-09 06:29:24.047049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.047082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.047467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.047501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.047852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.047884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.048219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.048253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.544 qpair failed and we were unable to recover it. 00:30:29.544 [2024-12-09 06:29:24.048606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.544 [2024-12-09 06:29:24.048639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.048990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.049022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.049245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.049276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.049671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.049703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.050044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.050077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.050468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.050501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 
00:30:29.545 [2024-12-09 06:29:24.050837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.050871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.051223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.051254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.051602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.051637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.051983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.052015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.052389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.052727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.052760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.053129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.053162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.053385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.053418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.053837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.053870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.054210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.054243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 
00:30:29.545 [2024-12-09 06:29:24.054526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.054559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.054796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.054827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.055059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.055090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.055335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.055369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.055636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.055669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.055980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.056012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.056358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.056390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.056757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.056791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.057139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.057172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.057401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.057433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 
00:30:29.545 [2024-12-09 06:29:24.057698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.057732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.058079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.058112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.058362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.058395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.058759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.058793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.059120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.059153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.059378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.059413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.059710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.059744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.060009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.060042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.060389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.060422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 00:30:29.545 [2024-12-09 06:29:24.060698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.545 [2024-12-09 06:29:24.060737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.545 qpair failed and we were unable to recover it. 
00:30:29.819 [2024-12-09 06:29:24.061112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.061148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.061510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.061544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.061924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.061958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.062331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.062363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.062586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.062621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.062962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.062994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.063325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.063358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.063719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.063752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.064081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.064114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.064495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.064528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 
00:30:29.819 [2024-12-09 06:29:24.064909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.064941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.065280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.065311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.065561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.065594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.819 [2024-12-09 06:29:24.065950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-09 06:29:24.065983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.819 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.066259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.066291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.066563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.066594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.066915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.066947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.067286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.067319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.067639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.067671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.068000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.068032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 
00:30:29.820 [2024-12-09 06:29:24.068349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.068381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.068676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.068708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.069063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.069094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.069427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.069475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.069831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.069862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.070112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.070142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.070466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.070498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.070865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.070897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.071138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.071168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.071514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.071547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 
00:30:29.820 [2024-12-09 06:29:24.071909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.071941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.072283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.072313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.072560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.072591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.072933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.072966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.073196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.073227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.073511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.073543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.073904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.073938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.074311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.074343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.074703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.074737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.075071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.075102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 
00:30:29.820 [2024-12-09 06:29:24.075468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.075501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.075871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.075904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.076068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.076099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.076486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.076863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.076895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.077223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.077256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.077578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.077611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.077873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.078218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.078249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 00:30:29.820 [2024-12-09 06:29:24.078587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.078622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it. 
00:30:29.820 [2024-12-09 06:29:24.078972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.820 [2024-12-09 06:29:24.079003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.820 qpair failed and we were unable to recover it.
00:30:29.820-00:30:29.840 [... the same three-line failure (connect() failed, errno = 111 from posix_sock_create; sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it.") repeats identically for every subsequent reconnect attempt, timestamped 2024-12-09 06:29:24.079332 through 06:29:24.155752 ...]
00:30:29.840 [2024-12-09 06:29:24.156076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.156106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.156486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.156519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.156890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.156924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.157262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.157292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.157654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.157687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.158087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.158118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.158470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.158504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.158891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.158921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.159306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.159338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.159672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.159704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 
00:30:29.840 [2024-12-09 06:29:24.160046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.160079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.160389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.160427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.160745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.160778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.161146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.161178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.840 [2024-12-09 06:29:24.161408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.840 [2024-12-09 06:29:24.161441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.840 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.161812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.161845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.162161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.162193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.162525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.162557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.162914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.162946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.163284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.163314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 
00:30:29.841 [2024-12-09 06:29:24.163666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.163699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.164113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.164146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.164559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.164591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.164938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.164971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.165322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.165354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.165714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.165748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.166095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.166129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.166554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.166586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.166904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.166938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.167299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.167330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 
00:30:29.841 [2024-12-09 06:29:24.167685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.167719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.168098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.168129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.168495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.168527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.168887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.168919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.169257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.169290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.169653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.169686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.170017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.170051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.170389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.170420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.170786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.170819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.171196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.171229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 
00:30:29.841 [2024-12-09 06:29:24.171553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.171586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.171979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.172010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.172241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.172270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.172591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.172624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.172942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.172975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.173350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.173383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.173639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.173670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.174011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.174042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.174429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.841 [2024-12-09 06:29:24.174473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.841 qpair failed and we were unable to recover it. 00:30:29.841 [2024-12-09 06:29:24.174823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.174855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 
00:30:29.842 [2024-12-09 06:29:24.175196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.175227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.175487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.175522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.175895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.175933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.176260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.176291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.176541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.176574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.176875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.176908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.177253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.177285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.177619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.177653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.177985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.178015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.178353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.178387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 
00:30:29.842 [2024-12-09 06:29:24.178775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.178808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.179148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.179180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.179493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.179526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.179949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.179981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.180250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.180281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.180523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.180555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.180926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.180957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.181294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.181324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.181671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.181703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.182041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.182074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 
00:30:29.842 [2024-12-09 06:29:24.182484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.182516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.182865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.182897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.183277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.183308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.183720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.183752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.184129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.184162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.184505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.184539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.184767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.184798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.185143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.185175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.185487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.185521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.185908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.185945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 
00:30:29.842 [2024-12-09 06:29:24.186170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.186203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.186525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.186557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.842 [2024-12-09 06:29:24.186821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.842 [2024-12-09 06:29:24.186852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.842 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.187223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.187254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.187500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.187532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.187808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.187841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.188181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.188214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.188573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.188605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.188873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.188908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.189320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.189351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 
00:30:29.843 [2024-12-09 06:29:24.189695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.189730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.189991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.190024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.190403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.190438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.190829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.190862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.191195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.191229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.191579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.191612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.192004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.192036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.192403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.192436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.192844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.192876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.193214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.193247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 
00:30:29.843 [2024-12-09 06:29:24.193602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.193637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.193990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.194022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.194366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.194399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.194650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.194681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.195036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.195068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.195442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.195505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.195855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.195888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.196283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.196315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.196623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.196654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.197009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.197040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 
00:30:29.843 [2024-12-09 06:29:24.197389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.197423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.197803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.197835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.198184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.198217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.198532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.198565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.198916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.198949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.199292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.199323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.199653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.843 [2024-12-09 06:29:24.199686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.843 qpair failed and we were unable to recover it. 00:30:29.843 [2024-12-09 06:29:24.200093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.200481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.200515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.200864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.200896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 
00:30:29.844 [2024-12-09 06:29:24.201245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.201283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.201536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.201568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.201923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.201955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.202285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.202318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.202755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.202788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.203124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.203157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.203437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.203478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.203825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.203858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.204216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.204248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 00:30:29.844 [2024-12-09 06:29:24.204616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.844 [2024-12-09 06:29:24.204648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.844 qpair failed and we were unable to recover it. 
00:30:29.844 [2024-12-09 06:29:24.204984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.844 [2024-12-09 06:29:24.205015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.844 qpair failed and we were unable to recover it.
[... the same three-entry error sequence repeats with only the timestamps advancing, from 06:29:24.205373 through 06:29:24.281066 (wall clock 00:30:29.844-00:30:29.849): every connect() attempt for tqpair=0xaaed30 to 10.0.0.2:4420 fails with errno = 111, and each time the qpair cannot be recovered ...]
00:30:29.849 [2024-12-09 06:29:24.281400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.849 [2024-12-09 06:29:24.281429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.849 qpair failed and we were unable to recover it. 00:30:29.849 [2024-12-09 06:29:24.281794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.849 [2024-12-09 06:29:24.281825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.282221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.282251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.282588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.282619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.283015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.283045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.283277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.283306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.283727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.283757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.284096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.284124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.284495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.284526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.284918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.284950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 
00:30:29.850 [2024-12-09 06:29:24.285285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.285315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.285661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.285691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.285914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.285946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.286316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.286345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.286694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.286724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.287072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.287103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.287326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.287358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.287685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.287716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.288099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.288128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.288477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.288508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 
00:30:29.850 [2024-12-09 06:29:24.288859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.288888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.289229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.289260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.289644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.289675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.290083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.290111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.290436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.290480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.290802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.290837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.291180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.291209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.291555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.291974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.292004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.292225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.292257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 
00:30:29.850 [2024-12-09 06:29:24.292620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.292650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.293006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.293035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.293386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.293414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.293819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.293850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.294177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.294206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.294569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.294601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.294968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.294997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.295377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.295405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.295684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.295717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 00:30:29.850 [2024-12-09 06:29:24.296072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.850 [2024-12-09 06:29:24.296103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.850 qpair failed and we were unable to recover it. 
00:30:29.850 [2024-12-09 06:29:24.296444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.296489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.296863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.296891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.297226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.297255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.297598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.297629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.297858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.297890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.298220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.298251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.298634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.298665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.299008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.299038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.299380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.299409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.299753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.299783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 
00:30:29.851 [2024-12-09 06:29:24.300007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.300040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.300404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.300434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.300696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.300735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.301146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.301175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.301517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.301548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.301903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.301932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.302276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.302305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.302665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.302696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.303007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.303038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.303378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.303407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 
00:30:29.851 [2024-12-09 06:29:24.303752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.303782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.304130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.304159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.304505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.304536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.304895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.304923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.305266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.305296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.305649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.305681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.306068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.306098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.306439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.306494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.306885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.306915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.307253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.307283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 
00:30:29.851 [2024-12-09 06:29:24.307653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.307684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.308092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.308121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.308508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.308538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.308873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.308902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.309253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.309282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.309635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.309666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.310042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.310072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.310412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.310441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.851 qpair failed and we were unable to recover it. 00:30:29.851 [2024-12-09 06:29:24.310840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.851 [2024-12-09 06:29:24.310870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.311211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.311240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 
00:30:29.852 [2024-12-09 06:29:24.311623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.311655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.312004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.312033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.312361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.312390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.312740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.312770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.313108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.313137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.313494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.313526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.313881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.313913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.314294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.314323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.314675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.314706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.315043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.315072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 
00:30:29.852 [2024-12-09 06:29:24.315306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.315334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.315671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.315701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.316049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.316080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.316419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.316474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.316825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.316855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.317237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.317265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.317487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.317520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.317874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.317904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.318262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.318293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.318672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.318704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 
00:30:29.852 [2024-12-09 06:29:24.319063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.319092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.319462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.319494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.319862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.319892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.320237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.320266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.320648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.320680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.321054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.321085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.321621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.321658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.321990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.322021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.322367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.322396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.322830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 
00:30:29.852 [2024-12-09 06:29:24.323248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.323278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.323507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.323541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.323904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.323933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.852 qpair failed and we were unable to recover it. 00:30:29.852 [2024-12-09 06:29:24.324286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.852 [2024-12-09 06:29:24.324316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.324541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.324575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.324878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.324907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.325254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.325285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.325647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.325677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.325993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.326363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.326393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 
00:30:29.853 [2024-12-09 06:29:24.326762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.326795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.327168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.327199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.327557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.327588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.327927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.327955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.328291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.328320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.328554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.328585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.328949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.328978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.329319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.329349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.329687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.329718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 00:30:29.853 [2024-12-09 06:29:24.330065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 06:29:24.330095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:29.853 qpair failed and we were unable to recover it. 
00:30:29.853 [2024-12-09 06:29:24.330471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.853 [2024-12-09 06:29:24.330503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:29.853 qpair failed and we were unable to recover it.
[... the same three-line failure record (connect() refused with errno = 111 / ECONNREFUSED, then the nvme_tcp_qpair_connect_sock error for tqpair=0xaaed30 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 06:29:24.330 to 06:29:24.411; the intervening duplicate retry records are elided ...]
00:30:30.134 [2024-12-09 06:29:24.411857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.134 [2024-12-09 06:29:24.411886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.134 qpair failed and we were unable to recover it.
00:30:30.134 [2024-12-09 06:29:24.412242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.412270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.412494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.412528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.412891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.412920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.413254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.413284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.413660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.413692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.414039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.414069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.414318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.414347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.414696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.414726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.134 qpair failed and we were unable to recover it. 00:30:30.134 [2024-12-09 06:29:24.415106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.134 [2024-12-09 06:29:24.415136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.415478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.415510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 
00:30:30.135 [2024-12-09 06:29:24.415880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.415909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.416257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.416286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.416629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.416659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.417006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.417035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.417412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.417443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.417810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.417839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.418181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.418210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.418436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.418478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.418835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.418864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.419205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.419235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 
00:30:30.135 [2024-12-09 06:29:24.419615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.419646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.419986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.420015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.420377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.420406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.420794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.420825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.421163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.421192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.421542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.421579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.421841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.421870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.422253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.422284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.422625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.422657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.423025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.423055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 
00:30:30.135 [2024-12-09 06:29:24.423465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.423497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.423736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.423764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.424112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.424143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.424482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.424513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.424858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.424886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.425126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.425154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.425523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.425574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.425967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.425997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.426377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.426407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.426831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.426862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 
00:30:30.135 [2024-12-09 06:29:24.427205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.427234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.427585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.427614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.427841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.427872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.135 [2024-12-09 06:29:24.428231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.135 [2024-12-09 06:29:24.428260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.135 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.428643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.428674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.429018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.429048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.429401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.429430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.429790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.429819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.430199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.430228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.430577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.430608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 
00:30:30.136 [2024-12-09 06:29:24.430956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.430984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.431319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.431744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.431776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.432128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.432158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.432542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.432897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.432926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.433276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.433307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.433628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.433658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.434000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.434029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.434368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.434397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 
00:30:30.136 [2024-12-09 06:29:24.434753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.434784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.435148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.435179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.435537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.435567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.435889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.435918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.436292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.436321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.436741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.436771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.437110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.437145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.437514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.437544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.437895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.437924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.438258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.438288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 
00:30:30.136 [2024-12-09 06:29:24.438603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.438634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.438905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.438934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.439291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.439321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.439671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.439701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.440016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.440046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.440387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.440417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.440813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.440843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.441186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.441215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.136 [2024-12-09 06:29:24.441572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.136 qpair failed and we were unable to recover it. 00:30:30.136 [2024-12-09 06:29:24.441921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.441952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-09 06:29:24.442205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.442235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.442584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.442615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.442996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.443024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.443372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.443401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.443674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.443704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.444135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.444165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.444512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.444545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.444900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.444930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.445188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.445218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.445549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.445580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-09 06:29:24.445922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.445951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.446184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.446216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.446565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.446596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.446934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.446972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.447352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.447382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.447727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.447758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.448010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.448042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.448421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.448462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.448816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.448847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.449213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.449243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-09 06:29:24.449586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.449618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.449998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.450027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.450349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.450378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.450694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.450724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.451114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.451146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.451468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.451498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.451814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.451844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.452227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.452258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.452645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.452676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.453057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.453088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 
00:30:30.137 [2024-12-09 06:29:24.453474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.453506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.453754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.453784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.454190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.454465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.454497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.454951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.454980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.455316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.455346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.455709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.455741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.137 qpair failed and we were unable to recover it. 00:30:30.137 [2024-12-09 06:29:24.456096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.137 [2024-12-09 06:29:24.456127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.456443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.456487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.456881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.456911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-09 06:29:24.457287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.457318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.457702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.457732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.458078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.458110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.458435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.458478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.458788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.458818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.459163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.459193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.459541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.459572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.459944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.459974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.460239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.460271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 00:30:30.138 [2024-12-09 06:29:24.460499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.138 [2024-12-09 06:29:24.460530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.138 qpair failed and we were unable to recover it. 
00:30:30.138 [2024-12-09 06:29:24.460776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.138 [2024-12-09 06:29:24.460806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.138 qpair failed and we were unable to recover it.
00:30:30.138 [2024-12-09 06:29:24.461083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.138 [2024-12-09 06:29:24.461113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.138 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats verbatim for every reconnect attempt between 06:29:24.461 and 06:29:24.536, always against tqpair=0xaaed30, addr=10.0.0.2, port=4420 ...]
00:30:30.144 [2024-12-09 06:29:24.536286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.144 [2024-12-09 06:29:24.536316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.144 qpair failed and we were unable to recover it.
00:30:30.144 [2024-12-09 06:29:24.536695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.536725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.537063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.537091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.537327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.537356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.537669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.537699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.538077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.538106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.538442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.538481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.538821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.538850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.539193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.539223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.539597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.539628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.539982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.540011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-09 06:29:24.540360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.540390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.540664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.540695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.541089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.541120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.541535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.541566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.541815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.542040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.542072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.542349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.542377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.542778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.542808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.543129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.543159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.543507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.543537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 
00:30:30.144 [2024-12-09 06:29:24.543888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.543917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.544247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.544277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.544628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.544657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.545019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.545047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.545423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.545465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.545825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.545854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.546172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.546202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.546432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.546473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.546751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.144 [2024-12-09 06:29:24.546780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.144 qpair failed and we were unable to recover it. 00:30:30.144 [2024-12-09 06:29:24.547114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.547143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 
00:30:30.145 [2024-12-09 06:29:24.547415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.547444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.547835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.547865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.548202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.548231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.548577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.548609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.548834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.548863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.549224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.549253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.549595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.549625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.549970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.549999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.550318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.550347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.550679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.550715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 
00:30:30.145 [2024-12-09 06:29:24.551094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.551123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.551467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.551497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.551853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.551883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.552234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.552263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.552610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.552640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.552989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.553019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.553246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.553275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.553645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.553675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.553885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.553918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.554155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.554186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 
00:30:30.145 [2024-12-09 06:29:24.554412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.554445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.554841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.554871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.555210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.555240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.555586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.555617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.555994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.556023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.556337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.556365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.556725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.556757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.557135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.557165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.557516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.557547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.557901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.557930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 
00:30:30.145 [2024-12-09 06:29:24.558270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.558299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.558648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.558677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.559055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.559083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.559430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.559469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.559852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.559880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.560217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.560246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.560624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.145 [2024-12-09 06:29:24.560661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.145 qpair failed and we were unable to recover it. 00:30:30.145 [2024-12-09 06:29:24.561017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.561047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.561409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.561438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.561719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.561749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 
00:30:30.146 [2024-12-09 06:29:24.562011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.562039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.562355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.562385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.562711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.562740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.563091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.563120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.563488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.563520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.563854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.563884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.564204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.564233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.564579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.564609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.564863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.564891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.565238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.565267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 
00:30:30.146 [2024-12-09 06:29:24.565655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.565686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.566034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.566063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.566400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.566429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.566688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.566719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.567066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.567094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.567434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.567490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.567834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.567864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.568213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.568242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.568563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.568594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.568922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.568951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 
00:30:30.146 [2024-12-09 06:29:24.569306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.569335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.569653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.569683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.570037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.570065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.570418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.570447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.570724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.570754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.571121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.571150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.571511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.571541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.571881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.571910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.572216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.572246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.572586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.572616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 
00:30:30.146 [2024-12-09 06:29:24.572933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.572962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.573216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.573248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.573614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.573644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.574069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.574099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.574420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.574459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.146 [2024-12-09 06:29:24.574838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.146 [2024-12-09 06:29:24.574868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.146 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.575244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.575274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.575617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.575654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.575987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.576016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.576364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.576393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 
00:30:30.147 [2024-12-09 06:29:24.576661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.576692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.577051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.577079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.577447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.577496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.577847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.577877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.578243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.578272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.578504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.578536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.578917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.578946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.579317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.579346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.579721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.579751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.579988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.580020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 
00:30:30.147 [2024-12-09 06:29:24.580370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.580400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.580776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.580807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.581057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.581086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.581447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.581487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.581800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.581829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.582177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.582206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.582524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.582555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.582916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.582945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.583281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.583310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 00:30:30.147 [2024-12-09 06:29:24.583669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.147 [2024-12-09 06:29:24.583699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.147 qpair failed and we were unable to recover it. 
00:30:30.147 [2024-12-09 06:29:24.584080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.147 [2024-12-09 06:29:24.584109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.147 qpair failed and we were unable to recover it.
[... the same three-line failure repeats, with only the timestamps advancing, for every reconnect attempt from 06:29:24.584 through 06:29:24.661 (~210 occurrences), all against tqpair=0xaaed30, addr=10.0.0.2, port=4420, errno = 111 ...]
00:30:30.153 [2024-12-09 06:29:24.661258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.153 [2024-12-09 06:29:24.661287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.153 qpair failed and we were unable to recover it.
00:30:30.153 [2024-12-09 06:29:24.661630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.661660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.662012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.662041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.662368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.662397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.662798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.662827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.663207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.663236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.663577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.663607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.663801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.663830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.664170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.664199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.664533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.664563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.664928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.664957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-09 06:29:24.665183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.665212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.665580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.665611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.665948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.665983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.666364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.666393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.666776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.666806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.667168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.667197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.667591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.667623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.667943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.667972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.668202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.668231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.668584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.668615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 
00:30:30.153 [2024-12-09 06:29:24.668996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.669026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.669361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.669390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.669771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.669801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.670146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.670176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.670555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.670585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.670933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.670962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.153 qpair failed and we were unable to recover it. 00:30:30.153 [2024-12-09 06:29:24.671189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.153 [2024-12-09 06:29:24.671219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.671484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.671516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.671718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.671749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.672098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.672128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-09 06:29:24.672443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.672487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.672849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.672878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.673134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.673163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.673499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.673531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.673877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.673906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.674126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.674155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.674392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.674425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.674760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.674790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.675140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.675170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.675512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.675550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-09 06:29:24.675897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.675926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.676276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.676305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.676661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.676692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.677028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.677057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.677435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.677477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.677838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.677868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.678229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.678259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.678601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.678631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.678983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.679013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.679355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.679385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-09 06:29:24.679764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.679794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.680105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.680134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.680514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.680545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.680932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.680963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.681132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.681163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.681518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.681548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.681894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.681924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.682269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.682298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.682654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.682685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.683050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.683080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 
00:30:30.154 [2024-12-09 06:29:24.683423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.683470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.683740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.683770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.684017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.684047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.684395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.684424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.684780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.154 [2024-12-09 06:29:24.684809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.154 qpair failed and we were unable to recover it. 00:30:30.154 [2024-12-09 06:29:24.685153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.685181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.685503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.685532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.685883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.685912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.686288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.686316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.686667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.686698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-12-09 06:29:24.687061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.687091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.687474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.687504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.687872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.687902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.688230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.688260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.688495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.688525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.688765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.688796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.689143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.689174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.689404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.689433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.689828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.689858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.690201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.690231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-12-09 06:29:24.690619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.690662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.691018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.691047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.691368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.691397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.691797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.691828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.692209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.692237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.692556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.692587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.692908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.692937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.693167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.693197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.693589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.693620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.693963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.693992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-12-09 06:29:24.694305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.694334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.694665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.694695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.695032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.695061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.695257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.695289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.695625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.695656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.695877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.695909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.696299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.696328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.696648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.696678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.697002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.697032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.697369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.697398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-12-09 06:29:24.697746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.697777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.698105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.698134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.698475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.698506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-12-09 06:29:24.698883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.155 [2024-12-09 06:29:24.698912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.699328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.699661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.699692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.700013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.700042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.700386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.700422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.700818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.700848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.701096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.701128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 
00:30:30.156 [2024-12-09 06:29:24.701510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.701540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.701890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.701920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.702300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.702329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.702695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.702725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-12-09 06:29:24.703104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.156 [2024-12-09 06:29:24.703134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.430 [2024-12-09 06:29:24.703504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.703537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.703919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.703950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.704298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.704328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.704684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.704714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.704956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.704986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 
00:30:30.431 [2024-12-09 06:29:24.705367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.705398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.705756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.705787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.706147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.706176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.706422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.706461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.706832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.706862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.707217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.707246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.707569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.707601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.707957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.707987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.708366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.708396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 00:30:30.431 [2024-12-09 06:29:24.708806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.431 [2024-12-09 06:29:24.708836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.431 qpair failed and we were unable to recover it. 
00:30:30.431 [2024-12-09 06:29:24.709085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.431 [2024-12-09 06:29:24.709114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.431 qpair failed and we were unable to recover it.
00:30:30.431 [... the same three-line error group repeats for every connect attempt from 2024-12-09 06:29:24.709469 through 06:29:24.783897, each failing with errno = 111 against tqpair=0xaaed30, addr=10.0.0.2, port=4420 ...]
00:30:30.437 [2024-12-09 06:29:24.784294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.437 [2024-12-09 06:29:24.784324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.437 qpair failed and we were unable to recover it.
00:30:30.437 [2024-12-09 06:29:24.784657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.784688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.785045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.785074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.785432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.785475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.785867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.785897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.786131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.786160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.786507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.786538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.786929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.786959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.787315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.787345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.787677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.787708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.788057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.788087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 
00:30:30.437 [2024-12-09 06:29:24.788428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.788471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.788824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.788855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.789218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.789255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.789630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.789661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.790004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.790036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.790404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.790436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.790730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.790760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.791091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.791121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.791296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.791328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.791547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.791580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 
00:30:30.437 [2024-12-09 06:29:24.791948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.791979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.792389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.792420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.792805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.792836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.793213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.793242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.793580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.793610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.793958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.793987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.794341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.794371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.794644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.794674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.795019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.795048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.795388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.795417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 
00:30:30.437 [2024-12-09 06:29:24.795819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.795850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.796220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.796251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.796641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.796672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.797030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.797060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.797412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.797442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.797830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.437 [2024-12-09 06:29:24.797860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.437 qpair failed and we were unable to recover it. 00:30:30.437 [2024-12-09 06:29:24.798197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.798227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.798565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.798595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.798928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.798957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.799225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.799255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 
00:30:30.438 [2024-12-09 06:29:24.799581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.799613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.799987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.800016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.800357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.800387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.800809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.800840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.801074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.801103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.801478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.801510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.801846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.801876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.802257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.802287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.802533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.802563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.802932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.802961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 
00:30:30.438 [2024-12-09 06:29:24.803302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.803330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.803658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.803688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.804035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.804064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.804408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.804444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.804829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.804858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.805212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.805240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.805566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.805597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.805935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.805963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.806289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.806317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.806704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.806734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 
00:30:30.438 [2024-12-09 06:29:24.807074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.807103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.807232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.807260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.807546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.807576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.807953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.807981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.808324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.808353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.808679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.808710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.809061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.809090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.809440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.809485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.809836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.809866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.810089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.810118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 
00:30:30.438 [2024-12-09 06:29:24.810513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.810544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.810933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.810963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.811335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.811364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.811599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.811630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.811868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.438 [2024-12-09 06:29:24.811896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.438 qpair failed and we were unable to recover it. 00:30:30.438 [2024-12-09 06:29:24.812265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.812295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.812552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.812582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.812945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.812973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.813327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.813355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.813659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.813689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 
00:30:30.439 [2024-12-09 06:29:24.813900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.813938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.814312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.814341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.814705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.814736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.814958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.814991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.815276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.815305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.815667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.815697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.816035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.816064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.816474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.816506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.816863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.816893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.817119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.817148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 
00:30:30.439 [2024-12-09 06:29:24.817508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.817538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.817916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.817944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.818291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.818320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.818643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.818674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.819015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.819045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.819267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.819297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.819611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.819642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.819985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.820361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.820390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.820766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.820798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 
00:30:30.439 [2024-12-09 06:29:24.821132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.821162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.821508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.821539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.821893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.821922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.822295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.822324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.822642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.822672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.823017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.823047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.823391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.823420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.823789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.823819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.824168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.824197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.824540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.824572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 
00:30:30.439 [2024-12-09 06:29:24.824925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.824954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.825332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.825361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.825736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.439 [2024-12-09 06:29:24.826078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.439 [2024-12-09 06:29:24.826107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.439 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.826465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.826497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.826844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.826874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.827211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.827240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.827515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.827546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.827901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.827930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.828277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.828307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 
00:30:30.440 [2024-12-09 06:29:24.828655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.828686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.829017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.829052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.829381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.829411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.829784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.829815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.830149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.830178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.830511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.830541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.830875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.830904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.831248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.831277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.831598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.831630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 00:30:30.440 [2024-12-09 06:29:24.831973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.440 [2024-12-09 06:29:24.832003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.440 qpair failed and we were unable to recover it. 
00:30:30.440 [2024-12-09 06:29:24.832249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.440 [2024-12-09 06:29:24.832282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.440 qpair failed and we were unable to recover it.
00:30:30.440 [... the same three-line error (posix.c:1054 connect() failed, errno = 111 -> nvme_tcp.c:2288 sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every retried connect attempt from 06:29:24.832 through 06:29:24.909; duplicate entries collapsed ...]
00:30:30.446 [2024-12-09 06:29:24.909539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.446 [2024-12-09 06:29:24.909570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.446 qpair failed and we were unable to recover it.
00:30:30.446 [2024-12-09 06:29:24.909912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.909941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.910289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.910318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.910695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.910725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.910980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.911011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.911403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.911432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.911817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.911847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.912184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.912212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.912563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.912592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.912950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.912978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.913323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.913352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 
00:30:30.446 [2024-12-09 06:29:24.913690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.913721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.913939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.913970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.914386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.914415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.914655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.914685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.915050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.915078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.915425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.915463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.915815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.915844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.916186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.916215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.916565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.916596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.916940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.916969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 
00:30:30.446 [2024-12-09 06:29:24.917215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.917244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.917578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.917607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.917915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.917944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.918294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.918322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.918640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.918670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.919002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.919032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.919423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.919469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.919842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.919871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.920198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.920227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.920574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.920604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 
00:30:30.446 [2024-12-09 06:29:24.920985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.921014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.921360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.921388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.446 [2024-12-09 06:29:24.921731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.446 [2024-12-09 06:29:24.921760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.446 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.922100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.922129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.922505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.922534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.922883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.922911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.923257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.923286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.923622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.923652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.923994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.924023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.924349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.924377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 
00:30:30.447 [2024-12-09 06:29:24.924684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.924715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.925102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.925131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.925483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.925513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.925890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.925919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.926253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.926282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.926653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.926683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.927063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.927092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.927427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.927466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.927841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.927871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.928304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 
00:30:30.447 [2024-12-09 06:29:24.928654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.928685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.928922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.928951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.929173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.929202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.929562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.929598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.929833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.929863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.930159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.930188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.930525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.930555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.930897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.930926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.931262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.931290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.931624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.931654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 
00:30:30.447 [2024-12-09 06:29:24.931888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.931918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.932258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.932286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.932638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.932668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.933011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.933041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.933371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.933400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.933788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.933818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.934200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.934229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.934583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.934614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.934963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.447 [2024-12-09 06:29:24.934993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.447 qpair failed and we were unable to recover it. 00:30:30.447 [2024-12-09 06:29:24.935328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.935356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 
00:30:30.448 [2024-12-09 06:29:24.935705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.935735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.936079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.936108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.936441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.936483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.936848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.936877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.937096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.937125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.937482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.937513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.937903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.937932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.938278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.938307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.938680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.938710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.939056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.939085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 
00:30:30.448 [2024-12-09 06:29:24.939409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.939437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.939824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.939854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.940246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.940275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.940614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.940643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.940857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.940886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.941277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.941305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.941675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.941705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.942048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.942077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.942394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.942424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.942797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.942827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 
00:30:30.448 [2024-12-09 06:29:24.943184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.943214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.943549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.943579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.943935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.943964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.944220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.944252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.944587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.944624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.944967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.944996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.945318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.945348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.945726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.945757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.946086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.946115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.946467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.946497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 
00:30:30.448 [2024-12-09 06:29:24.946848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.946876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.947263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.947292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.947621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.947651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.448 [2024-12-09 06:29:24.948005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.448 [2024-12-09 06:29:24.948034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.448 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.948365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.948395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.948746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.948775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.949150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.949179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.949402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.949435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.949815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.949845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.950193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.950222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 
00:30:30.449 [2024-12-09 06:29:24.950563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.950594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.950945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.950974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.951292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.951322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.951678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.951708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.952082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.952111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.952469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.952499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.952880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.952909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.953186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.953216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.953557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.953588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.953803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.953832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 
00:30:30.449 [2024-12-09 06:29:24.954153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.954182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.954579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.954610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.955001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.955030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.955363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.955392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.955785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.955817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.956166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.956195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.956582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.956612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.956964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.956993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.957337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.957366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 00:30:30.449 [2024-12-09 06:29:24.957726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.449 [2024-12-09 06:29:24.957757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.449 qpair failed and we were unable to recover it. 
00:30:30.449 [2024-12-09 06:29:24.958101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.958131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.958470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.958501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.958897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.958926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.959271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.959300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.959681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.959711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.959969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.959999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.449 qpair failed and we were unable to recover it.
00:30:30.449 [2024-12-09 06:29:24.960341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.449 [2024-12-09 06:29:24.960370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.960724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.960754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.961102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.961131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.961476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.961508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.961856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.961886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.962227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.962257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.962644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.962675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.963025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.963054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.963385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.963414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.963664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.963698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.963933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.963966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.964310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.964339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.964663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.964694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.964950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.964980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.965341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.965370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.965611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.965641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.965984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.966013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.966355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.966384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.966732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.966764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.966999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.967027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.967250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.967279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.967551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.967581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.967883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.967912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.968163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.968193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.968426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.968467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.968817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.968845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.969226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.969262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.969495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.969525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.969761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.969790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.970148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.970176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.970514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.970544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.970889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.970917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.971262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.971291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.971627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.971658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.972057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.972087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.972307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.972337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.972570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.972601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.972943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.972973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.973309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.450 [2024-12-09 06:29:24.973338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.450 qpair failed and we were unable to recover it.
00:30:30.450 [2024-12-09 06:29:24.973580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.973610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.973988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.974018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.974364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.974393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.974796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.974828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.975178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.975207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.975430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.975473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.975594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.975623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.976019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.976048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.976382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.976411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.976742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.976772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.977127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.977157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.977542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.977573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.977936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.977965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.978185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.978214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.978437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.978478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.978723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.978754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.979105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.979134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.979506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.979536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.979827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.979857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.980224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.980253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.980577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.980608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.980832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.980861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.981216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.981246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.981598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.981629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.981956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.981985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.982341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.982370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.982720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.982750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.983072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.983101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.983459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.983489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.983636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.983668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.984021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.984050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.984391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.984420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.984648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.984680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.985056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.985085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.985440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.985482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.985878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.985907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.986242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.986272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.986597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.986627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.451 qpair failed and we were unable to recover it.
00:30:30.451 [2024-12-09 06:29:24.986972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.451 [2024-12-09 06:29:24.987002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.987232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.987262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.987532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.987562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.987817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.987849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.988217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.988247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.988586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.988616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.988965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.988994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.989355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.989384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.989743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.989773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.990191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.990220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.990569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.990599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.990983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.991013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.991313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.991342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.991726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.991757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.992121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.992150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.992512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.992542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.992892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.992921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.993166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.993201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.993489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.993519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.993702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.993731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.993989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.994018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.994271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.994301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.994647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.994678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.995035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.995065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.995400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.995429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.995767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.995797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.996148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.996178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.996525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.996555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.996781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.996811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.997183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.997213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.997502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.997533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.997859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.997889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.998232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.998261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.998656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.998686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.998932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.998961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.999292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.999322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.999545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.999576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:24.999914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:24.999943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:25.000150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.452 [2024-12-09 06:29:25.000180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.452 qpair failed and we were unable to recover it.
00:30:30.452 [2024-12-09 06:29:25.000578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.000609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.000974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.001003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.001351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.001380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.001711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.001741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.002093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.002122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.002363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.002391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.002767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.453 [2024-12-09 06:29:25.002799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.453 qpair failed and we were unable to recover it.
00:30:30.453 [2024-12-09 06:29:25.003026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.737 [2024-12-09 06:29:25.003056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.737 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.003414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.003445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.003804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.003834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.004165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.004195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.004528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.004560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.004776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.004806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.005211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.005241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.005591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.005621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.005879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.005909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.006264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.006293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.006647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.006677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.006978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.007007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.007259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.007294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.007660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.007692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.007939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.007968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.008301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.008330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.008657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.008688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.009042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.009071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.009447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.009488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.009875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.009905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.010244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.010273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.010603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.010633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.010945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.010974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.011316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.011345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.011740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.011771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.012110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.012140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.012492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.012523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.012890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.012919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.013262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.013292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.013421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.013462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.013687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.013716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.013964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.013993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.014330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.014358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.014716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.014746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.015092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.015122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.015250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.015280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.015665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.015696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.015935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.015964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.016317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.016346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.016702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.016741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.017088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.017117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.017483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.017513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.017856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.017886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.018265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.018294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.018634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.018664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.019016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.019045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.019393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.019421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.019793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.019823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.020185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.020214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.020572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.020603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.020834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.020862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.021226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.021254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.738 qpair failed and we were unable to recover it.
00:30:30.738 [2024-12-09 06:29:25.021607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.738 [2024-12-09 06:29:25.021637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.022005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.022035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.022379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.022408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.022657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.022687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.023025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.023054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.023398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.023427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.023759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.023789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.024164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.024192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.024533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.024564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.024910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.024938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.025283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.025312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.025644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.025675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.026015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.026044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.026266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.026294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.026712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.026742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.027117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.027147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.027487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.027517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.027868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.027897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.028292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.028321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.028692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.029061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.029090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.029405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.029434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.029772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.029802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.030179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.030208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.030558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.030588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.030936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.030964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.031212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.031244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.031572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.031602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.031999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.739 [2024-12-09 06:29:25.032035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.739 qpair failed and we were unable to recover it.
00:30:30.739 [2024-12-09 06:29:25.032373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.032402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.032745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.032776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.033010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.033043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.033478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.033508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.033855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.033884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.034293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.034323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.034718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.034749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.035081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.035109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.035434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.035473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.035882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 
00:30:30.739 [2024-12-09 06:29:25.036104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.036132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.036481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.036512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.036901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.036931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.037299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.037329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.037658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.037688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.038034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.038063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.038290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.038319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.038698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.038728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.039110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.039139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.039480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.039511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 
00:30:30.739 [2024-12-09 06:29:25.039894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.039923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.040258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.040287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.040543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.040574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.739 qpair failed and we were unable to recover it. 00:30:30.739 [2024-12-09 06:29:25.040928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.739 [2024-12-09 06:29:25.040957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.041323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.041352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.041713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.041744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.042119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.042154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.042498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.042528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.042875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.042904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.043153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.043181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 
00:30:30.740 [2024-12-09 06:29:25.043400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.043435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.043801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.043830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.044173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.044202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.044556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.044586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.044909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.044938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.045288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.045316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.045563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.045597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.046014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.046043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.046375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.046403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.046799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.046829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 
00:30:30.740 [2024-12-09 06:29:25.047180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.047210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.047578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.047609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.047885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.047918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.048266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.048296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.048705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.048735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.049100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.049129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.049481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.049528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.049841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.049871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.050214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.050244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.050586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.050617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 
00:30:30.740 [2024-12-09 06:29:25.050964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.050992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.051335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.051364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.051712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.051743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.052137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.052166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.052572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.052602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.052941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.052969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.053347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.053375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.053789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.053820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.054161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.054189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.054420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.054469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 
00:30:30.740 [2024-12-09 06:29:25.054843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.054872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.055216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.055246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.055591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.055621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.055956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.055986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.056336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.056365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.056739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.056768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.057001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.057033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.057387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.057422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.057813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.058159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.058188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 
00:30:30.740 [2024-12-09 06:29:25.058502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.058534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.058899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.058928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.059269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.059300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.740 [2024-12-09 06:29:25.059658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.740 [2024-12-09 06:29:25.059688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.740 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.059912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.059941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.060296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.060324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.060662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.060692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.061018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.061048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.061430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.061470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.061810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.061840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.062177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.062207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.062581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.062613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.062861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.062890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.063234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.063263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.063611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.063642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.064015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.064044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.064391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.064419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.064816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.064846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.065224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.065254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.065630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.065661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.065930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.065959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.066278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.066307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.066684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.067096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.067126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.067459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.067495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.067855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.067884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.068261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.068290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.068670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.068700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.069031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.069060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.069406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.069435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.069824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.069853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.070204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.070233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.070569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.070600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.070919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.070948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.071329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.071358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.071583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.071613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.071956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.071985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.072331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.072361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.072725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.072755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.073102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.073131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.073469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.073500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.073846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.073875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.074257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.074286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.074623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.074653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.074985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.075015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.075357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.075386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.075612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.075646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.075896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.075926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.076296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.076326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.076578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.076608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.076956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.076985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.077332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.077361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.077686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.077716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.078071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.078101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.078438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.078481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.078840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.078869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.079116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.079145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.079535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.079566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.079947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.079975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 00:30:30.741 [2024-12-09 06:29:25.080319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.741 [2024-12-09 06:29:25.080348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.741 qpair failed and we were unable to recover it. 
00:30:30.741 [2024-12-09 06:29:25.080574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.741 [2024-12-09 06:29:25.080604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.741 qpair failed and we were unable to recover it.
00:30:30.741 [2024-12-09 06:29:25.080958 .. 2024-12-09 06:29:25.157848] (the preceding three messages repeated for every subsequent connect() attempt: posix_sock_create failed with errno = 111, nvme_tcp_qpair_connect_sock reported a sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420, and the qpair failed and could not be recovered)
00:30:30.744 [2024-12-09 06:29:25.158100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.158129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.158463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.158494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.158894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.158922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.159247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.159276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.159624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.159654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.159990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.160020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.160369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.160398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.160727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.161083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.161113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.161496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.161527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 
00:30:30.744 [2024-12-09 06:29:25.161878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.161907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.744 [2024-12-09 06:29:25.162148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.744 [2024-12-09 06:29:25.162177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.744 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.162519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.162549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.162890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.162920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.163267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.163296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.163639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.163670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.164009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.164039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.164422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.164461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.164839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.164869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.165205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.165233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.165625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.165655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.166003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.166032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.166394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.166423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.166786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.166815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.167154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.167183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.167525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.167556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.167820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.167849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.168214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.168243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.168467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.168498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.168837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.168866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.169228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.169257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.169581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.169612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.169971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.169999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.170338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.170366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.170645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.170675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.171097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.171126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.171400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.171429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.171691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.171721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.172053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.172082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.172405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.172433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.172824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.172859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.173203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.173232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.173572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.173603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.173929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.173959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.174308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.174337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.174663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.174692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.175031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.175060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.175400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.175428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.175766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.175796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.176170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.176199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.176550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.176580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.176918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.176946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.177283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.177311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.177645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.177675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.177954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.177984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.178326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.178355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.178680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.178710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.178973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.179001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.179322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.179351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.179676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.179706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.179936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.179968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.180325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.180354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.180674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.180704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.181017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.181046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.181385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.181413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.181779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.181810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.182156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.182185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.182520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.182557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.182794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.182822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.183181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.183211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.183474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.183505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.183849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.183878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.184231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.184260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.184639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.184670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.185024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.185053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.185388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.185417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.185787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.185817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.186161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.186190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.186530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.186561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.186904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.186932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 
00:30:30.745 [2024-12-09 06:29:25.187274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.187303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.187675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.187705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.188040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.745 [2024-12-09 06:29:25.188068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.745 qpair failed and we were unable to recover it. 00:30:30.745 [2024-12-09 06:29:25.188316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.188349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.188590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.188620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.188966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.188994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.189340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.189370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.189617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.189648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.190021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.190050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.190397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.190427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.190810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.190842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.191195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.191224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.191568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.191599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.193591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.193657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.194044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.194078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.194443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.194489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.194864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.194894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.195276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.195305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.195579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.195612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.195977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.196006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.196356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.196386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.196766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.196798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.197145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.197174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.197523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.197553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.197897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.197926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.198291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.198321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.198567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.198597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.198970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.199001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.199341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.199379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.199653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.199687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.200028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.200058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.200385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.200413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.200770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.200801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.201160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.201190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.201538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.201569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.201932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.201962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.202371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.202400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.202779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.202810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.203179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.203209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.203527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.203560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.203920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.203950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.204333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.204363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.204739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.204769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.205091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.205121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.205471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.205503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.205843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.205873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.206199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.206229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.206620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.206651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.207034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.207064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.207304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.207334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.207695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.207727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.208045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.208073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.208374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.208404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.208663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.208696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.209017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.209046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.209428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.209468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.209812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.209842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.210174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.210204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.210427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.210467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.210815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.210844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.211211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.211240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.211591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.211623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.211872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.211902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.212169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.212199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.212549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.212580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.212815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.212848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.213191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.213220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.213533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.213566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.213942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.213972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 00:30:30.746 [2024-12-09 06:29:25.214362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.746 [2024-12-09 06:29:25.214392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:30.746 qpair failed and we were unable to recover it. 
00:30:30.746 [2024-12-09 06:29:25.214752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.746 [2024-12-09 06:29:25.214783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.746 qpair failed and we were unable to recover it.
00:30:30.746 [2024-12-09 06:29:25.215023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.215053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.215415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.215444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.215818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.215848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.216252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.216282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.216500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.216530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.216879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.216910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.217293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.217323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.217666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.217698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.218015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.218045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.218395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.218425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.218853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.219204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.219235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.219590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.219621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.219971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.220000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.220353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.220382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.220742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.220774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.221124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.221153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.221494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.221525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.221661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.221694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.221941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.221972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.222293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.222323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.222650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.222681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.222954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.222984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.223205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.223237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.223595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.223627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.223864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.223901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Read completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 Write completed with error (sct=0, sc=8)
00:30:30.747 starting I/O failed
00:30:30.747 [2024-12-09 06:29:25.224259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.747 [2024-12-09 06:29:25.224562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.224590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.224784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.224795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.224990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.225002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.225343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.225354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.225693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.225705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.226039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.226051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.226388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.226399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.226590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.226602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.226816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.226828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.227009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.227019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.227368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.227378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.227740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.227751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.227844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.227853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.228127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.228137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.228439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.228455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.228784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.228795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.229003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.229014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.229210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.229220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.229524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.229535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.229868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.229878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.230203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.230213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.230595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.230607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.230945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.230956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.231290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.231301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.231609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.231621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.231939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.231950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.232146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.232158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.232488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.232500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.232824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.233054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.233065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.233340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.233351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.233653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.233664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.233967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.233980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.234173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.234184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.234374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.234385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.747 qpair failed and we were unable to recover it.
00:30:30.747 [2024-12-09 06:29:25.234726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.747 [2024-12-09 06:29:25.234736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.235068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.235080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.235404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.235415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.235752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.235764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.236091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.236103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.236438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.236453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.236680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.236691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.236892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.236901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.237217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.237227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.237427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.237437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.237766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.237776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.238093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.238103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.238299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.238312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.238575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.238585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.238898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.238909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.239239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.239249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.239447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.239464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.239804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.239814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.240150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.240161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.240503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.240514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.240825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.241139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.241149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.241536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.241840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.241851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.242113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.242123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.242444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.242457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.242797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.242807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.242993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.243002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.243343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.243353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.243697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.243708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.244043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.244052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.244237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.244246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.244539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.244550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.244873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.244882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.245200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.245211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.245414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.245425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.245722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.245734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.246068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.246077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.246422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.246432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.246700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.246711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.246891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.246900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.247211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.247221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.247541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.247551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.247881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.247891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.248210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.248219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.248562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.248573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.248873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.248882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.249194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.249205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.249484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.249493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.249843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.249853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.250158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.250167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.250506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.250516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.250737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.250747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.251066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.251076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.251394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.251403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.251683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.251693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.251804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.251812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.252127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.252136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.252326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.252336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.252647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.252658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.252949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.252958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.253274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.253283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.253612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.253622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.253962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.253971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.254238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.254251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.254574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.254584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.254776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.254787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.748 [2024-12-09 06:29:25.255099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.748 [2024-12-09 06:29:25.255110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.748 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.255447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.255467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.255772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.255781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.256095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.256104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.256178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.256187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.256503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.256514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.256711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.256720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.257046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.257055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.257409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.257419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.257825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.257835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.258153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.258162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.258338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.258348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.258518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.258530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.258869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.258879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.259187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.259198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.259380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.259390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.259698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.259709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.260050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.260060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.260397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.260406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.260727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.260737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.261074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.261084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.261406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.261416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.261586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.261598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.261966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.261976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.262170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.262181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.262459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.262470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.262706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.262717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.263014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.263024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.263338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.749 [2024-12-09 06:29:25.263347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:30.749 qpair failed and we were unable to recover it.
00:30:30.749 [2024-12-09 06:29:25.263656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.263666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.263851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.263861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.264193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.264203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.264521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.264532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.264879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.264888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.265132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.265142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.265461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.265472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.265773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.265783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.266081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.266094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.266388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.266397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 
00:30:30.749 [2024-12-09 06:29:25.266603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.266614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.266887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.266896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.267231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.267241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.267640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.267651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.267968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.267977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.268279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.268289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.268625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.268636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.268933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.268943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.269156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.269166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.269457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.269467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 
00:30:30.749 [2024-12-09 06:29:25.269772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.269783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.269987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.269997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.270312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.270322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.270650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.270660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.270997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.271304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.271503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.271699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.271892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.271957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.271966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 
00:30:30.749 [2024-12-09 06:29:25.272271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.272282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.272557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.272567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.272885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.272894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.273211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.273221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.273411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.273420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.273723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.273734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.274073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.274083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.274308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.274317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.274520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.274531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.274935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.274945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 
00:30:30.749 [2024-12-09 06:29:25.275143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.275152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.275365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.275374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.749 qpair failed and we were unable to recover it. 00:30:30.749 [2024-12-09 06:29:25.275771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.749 [2024-12-09 06:29:25.275781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.276061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.276071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.276391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.276400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.276710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.276720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.277060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.277069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.277411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.277421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.277776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.277789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.278100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.278110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.278423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.278434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.278745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.278756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.278949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.278959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.279234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.279243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.279465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.279477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.279812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.279822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.280056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.280066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.280249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.280259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.280527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.280537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.280834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.280844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.281144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.281154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.281458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.281468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.281813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.281823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.282127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.282137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.282338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.282349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.282666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.282677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.283003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.283012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.283373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.283383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.283651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.283661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.283847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.283857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.284216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.284225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.284439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.284453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.284663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.284672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.284974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.284984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.285291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.285300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.285503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.285514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.285796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.285806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.286177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.286186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.286489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.286499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.286814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.286824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.287015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.287026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.287311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.287320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.287622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.287633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.287824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.287835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.288086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.288095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.288403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.288413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.288632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.288643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.288828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.288837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.289116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.289131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.289454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.289465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.289772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.289782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.290120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.290130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.290465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.290476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.290811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.290821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.291112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.291121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.291413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.291422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.291721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.291731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.292059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.292068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.292375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.292385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.292621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.292631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.292939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.292949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.293271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.293281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.293598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.293608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.293993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.294002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.294319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.294329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.294614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.294624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.294941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.294953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.295293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.295304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.295622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.295633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.295956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.295966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 
00:30:30.750 [2024-12-09 06:29:25.296285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.296296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.296630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.296640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.750 [2024-12-09 06:29:25.296975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.750 [2024-12-09 06:29:25.296985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.750 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.297288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.297297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.297587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.297597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.297986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.297995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.298312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.298322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.298627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.298638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.298951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.298961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.299293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.299302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 
00:30:30.751 [2024-12-09 06:29:25.299635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.299645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.299880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.299891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.300189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.300198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.300595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.300605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.300805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.300815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.301167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.301176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.301412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.301422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.301743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.301754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.302064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.302076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.302385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.302395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 
00:30:30.751 [2024-12-09 06:29:25.302725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.302735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.303071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.303082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:30.751 qpair failed and we were unable to recover it. 00:30:30.751 [2024-12-09 06:29:25.303385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.751 [2024-12-09 06:29:25.303396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.303707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.303720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.304036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.304046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.304414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.304424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.304617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.304627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.304825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.304834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.305088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.025 [2024-12-09 06:29:25.305099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.025 qpair failed and we were unable to recover it. 00:30:31.025 [2024-12-09 06:29:25.305425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.305434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 
00:30:31.026 [2024-12-09 06:29:25.305645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.305655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.305954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.305964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.306260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.306271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.306583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.306593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.306776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.306786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.307078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.307088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.307422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.307431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.307770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.308120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.308129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 00:30:31.026 [2024-12-09 06:29:25.308462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.026 [2024-12-09 06:29:25.308472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.026 qpair failed and we were unable to recover it. 
00:30:31.026 [2024-12-09 06:29:25.308643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.026 [2024-12-09 06:29:25.308653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:31.026 qpair failed and we were unable to recover it.
00:30:31.026 [... the same three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 06:29:25.308643 through 06:29:25.374719; duplicate entries elided ...]
00:30:31.032 [2024-12-09 06:29:25.374709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.032 [2024-12-09 06:29:25.374719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:31.032 qpair failed and we were unable to recover it.
00:30:31.032 [2024-12-09 06:29:25.375052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.375061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.375386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.375396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.375679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.375689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.375883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.375893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.376166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.376176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.376442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.376457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.376747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.376757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.377065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.377075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.377259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.377272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.377591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.377601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 
00:30:31.032 [2024-12-09 06:29:25.377777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.377787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.378101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.378110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.378446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.378459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.378795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.378804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.378990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.378999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.379318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.379328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.379674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.379684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.380023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.380033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.380320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.380331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.380662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.380672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 
00:30:31.032 [2024-12-09 06:29:25.381010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.381019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.381365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.381374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.381754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.381763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.382078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.382088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.382418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.382428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.382730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.382739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.383077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.383086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.383299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.383309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.383617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.383627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.383871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.383882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 
00:30:31.032 [2024-12-09 06:29:25.384058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.384067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.032 [2024-12-09 06:29:25.384294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.032 [2024-12-09 06:29:25.384303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.032 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.384582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.384594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.384911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.384921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.385285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.385294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.385619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.385629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.385956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.385966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.386251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.386261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.386444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.386459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.386855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.386865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 
00:30:31.033 [2024-12-09 06:29:25.387047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.387057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.387376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.387386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.387714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.387724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.388046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.388056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.388371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.388380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.388666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.388675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.389005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.389015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.389317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.389326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.389530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.389543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.389923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.389933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 
00:30:31.033 [2024-12-09 06:29:25.390247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.390257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.390560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.390571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.390901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.390911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.391217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.391227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.391542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.391552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.391852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.391862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.392198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.392207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.392546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.392556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.392903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.392912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.393212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.393222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 
00:30:31.033 [2024-12-09 06:29:25.393537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.393547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.393885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.393895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.394190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.394201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.394473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.394483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.394667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.394676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.394936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.394946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.395264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.395273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.395605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.033 [2024-12-09 06:29:25.395615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.033 qpair failed and we were unable to recover it. 00:30:31.033 [2024-12-09 06:29:25.395928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.395938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.396135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.396147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 
00:30:31.034 [2024-12-09 06:29:25.396468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.396479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.396679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.396688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.397002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.397012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.397348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.397357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.397704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.397714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.398053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.398063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.398401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.398410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.398732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.398742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.398930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.398940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.399269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.399278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 
00:30:31.034 [2024-12-09 06:29:25.399474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.399484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.399762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.399773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.400110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.400120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.400436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.400445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.400785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.400794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.401124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.401134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.401326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.401337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.401629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.401639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.401940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.401949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.402253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.402264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 
00:30:31.034 [2024-12-09 06:29:25.402637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.402646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.402979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.402989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.403199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.403208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.403490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.403506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.403830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.403842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.404176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.404186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.404472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.404482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.404778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.404789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.405119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.405128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.405473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.405483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 
00:30:31.034 [2024-12-09 06:29:25.405795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.405804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.406122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.406132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.406412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.406422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.406831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.406841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.407177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.407187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.407490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.034 [2024-12-09 06:29:25.407499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.034 qpair failed and we were unable to recover it. 00:30:31.034 [2024-12-09 06:29:25.407807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.407817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.408136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.408147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.408464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.408475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.408786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.408796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 
00:30:31.035 [2024-12-09 06:29:25.409134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.409143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.409466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.409477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.409692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.409702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.410035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.410045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.410347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.410358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.410697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.410710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.411038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.411047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.411325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.411335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.411691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.411700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.412013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.412023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 
00:30:31.035 [2024-12-09 06:29:25.412213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.412223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.412427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.412437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.412722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.412733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.413066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.413076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.413416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.413426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.413573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.413584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.413884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.413895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.414218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.414229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.414543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.414553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 00:30:31.035 [2024-12-09 06:29:25.414891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.414901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it. 
00:30:31.035 [2024-12-09 06:29:25.415270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.035 [2024-12-09 06:29:25.415280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.035 qpair failed and we were unable to recover it.
00:30:31.037 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f71b0000b90 repeats from 06:29:25.415410 through 06:29:25.443517; identical retries elided ...]
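Errno 111 here is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target application is down, so every reconnect attempt fails immediately and the host logs another unrecoverable qpair. A minimal standalone sketch (not SPDK code; only the address and port are taken from the log above) that reproduces the same errno on a host where the address is reachable but no listener is bound:

    /* Illustrative only: connect() to a reachable address with no
     * listener returns -1 with errno 111 (ECONNREFUSED) on Linux,
     * which is exactly what the retry loop in the log is hitting. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no target listening this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }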
00:30:31.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 504357 Killed "${NVMF_APP[@]}" "$@"
00:30:31.038 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:31.038 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:31.038 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:31.038 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:31.038 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=505124
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 505124
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 505124 ']'
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:31.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:31.039 06:29:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.039 [... connect() retries to 10.0.0.2:4420 (errno = 111) continue interleaved with the trace above; identical triplets from 06:29:25.443871 through 06:29:25.456576 elided ...]
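The waitforlisten 505124 step above blocks until the freshly started nvmf_tgt (pid 505124) answers on its RPC socket at /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A hedged sketch of that polling pattern follows; it illustrates the idea under those assumptions and is not the actual autotest_common.sh implementation:

    /* Hedged sketch (assumption, not SPDK's waitforlisten): poll until a
     * UNIX-domain socket at `path` accepts connections or retries run out. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Returns 0 once something is listening on `path`, -1 on timeout. */
    static int wait_for_listen(const char *path, int retries)
    {
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;          /* RPC socket is up */
            }
            close(fd);
            usleep(100 * 1000);    /* 100 ms between attempts */
        }
        return -1;
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            printf("listening\n");
        else
            printf("timed out\n");
        return 0;
    }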
00:30:31.039 [2024-12-09 06:29:25.456850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.039 [2024-12-09 06:29:25.456862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.039 qpair failed and we were unable to recover it.
00:30:31.041 [... the same errno = 111 retry triplet for tqpair=0x7f71b0000b90 repeats through 06:29:25.476199; identical retries elided ...]
00:30:31.041 [2024-12-09 06:29:25.476553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.476563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.476765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.476774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.477084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.477094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.477410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.477420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.477805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.477816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.478074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.478085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.478291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.478304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.478582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.478592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.478919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.478929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.479241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.479251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 
00:30:31.041 [2024-12-09 06:29:25.479564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.479576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.479922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.479936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.480282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.480529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.480820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.480831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.481026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.481036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.481392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.481403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.481596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.481607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.481948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.481959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.482265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.482275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 
00:30:31.041 [2024-12-09 06:29:25.482476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.482487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.482858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.482868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.483199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.483208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.483533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.483544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.483880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.483890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.484190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.484201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.484514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.041 [2024-12-09 06:29:25.484524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.041 qpair failed and we were unable to recover it. 00:30:31.041 [2024-12-09 06:29:25.484838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.484848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.485195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.485207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.485553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 
00:30:31.042 [2024-12-09 06:29:25.485900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.485914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.486227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.486237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.486441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.486458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.486634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.486644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.486966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.486977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.487314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.487325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.487665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.487676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.487861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.487871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.488216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.488226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.488570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.488581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 
00:30:31.042 [2024-12-09 06:29:25.488892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.488903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.489237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.489248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.489532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.489543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.489854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.489863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.490171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.490181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.490429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.490440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.490744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.490754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.491075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.491086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.491271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.491282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.491471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.491481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 
00:30:31.042 [2024-12-09 06:29:25.491801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.491811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.492184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.492195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.492370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.492381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.492755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.492766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.493062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.493072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.493398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.493408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.493732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.493744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.494061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.494072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.494309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.494320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.494559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.494569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 
00:30:31.042 [2024-12-09 06:29:25.494956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.494966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.495166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.495179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.495485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.495497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.495828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.495838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.496149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.042 [2024-12-09 06:29:25.496160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.042 qpair failed and we were unable to recover it. 00:30:31.042 [2024-12-09 06:29:25.496490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.496500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.496872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.496885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.497191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.497201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.497485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.497495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.497801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.497812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-12-09 06:29:25.497989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.498002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.498319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.498330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.498591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.498601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.498908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.498918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.499225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.499235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.499534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.499546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.499720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.499730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.500064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.500074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.500384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.500394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.500640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.500652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-12-09 06:29:25.500960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.500970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.501271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.501281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.501626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.501637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.501941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.501951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.502137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.502146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.502385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.502394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.502599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.502610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.502933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.502943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.503288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.503298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.503651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.503663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-12-09 06:29:25.503978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.503988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.504291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.504301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.504585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.504596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.504950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.504959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.505244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.505254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.505584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.505594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.505971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.505980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.506175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.506185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.506456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.506466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.506744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.506754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-12-09 06:29:25.507108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.507120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.507320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.507330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.507621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.043 [2024-12-09 06:29:25.507632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-12-09 06:29:25.507933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.507944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.508274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.508284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.508476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.508487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.508793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.508802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.508970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.508980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.509309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.509320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.509772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.509782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-12-09 06:29:25.509961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.509975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.510202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.510212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.510528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.510538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.510736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.510746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.511051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.511061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.511268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.511278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.511565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.511576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.511777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.511787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.511893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.512240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.512249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-12-09 06:29:25.512399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.512409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.512614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.512624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.512793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.512804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.513031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.513041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.513233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.513243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.513575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.513586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.513901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.513911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.514117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.514126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.514404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.514414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-12-09 06:29:25.514620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.044 [2024-12-09 06:29:25.514629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.044 qpair failed and we were unable to recover it. 
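On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) was answered with a RST because nothing was listening yet, so each qpair connect attempt fails immediately and the test retries in a tight loop. A minimal, hedged sketch in plain POSIX C (not SPDK code; the address and port are copied from the log) that reproduces the same errno when no listener is up:

```c
/* Hedged sketch, not SPDK code: reproduce errno 111 (ECONNREFUSED) with a
 * plain TCP connect() to an address/port where nothing is listening.
 * 10.0.0.2:4420 are the values reported in the log entries above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```

The NVMe/TCP host driver surfaces that socket-level failure as the paired "sock connection error" message and abandons the qpair, which matches the three-entry pattern repeated above.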
00:30:31.044 [... repeated connect() failed / qpair failed entries for tqpair=0x7f71b0000b90 (addr=10.0.0.2, port=4420) continue from 06:29:25.514901 through 06:29:25.516079; duplicate entries elided ...]
00:30:31.044 [2024-12-09 06:29:25.516209] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
00:30:31.044 [2024-12-09 06:29:25.516285] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:31.044 [2024-12-09 06:29:25.516381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.044 [2024-12-09 06:29:25.516396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:31.044 qpair failed and we were unable to recover it.
00:30:31.045 [... three further identical failure sequences (06:29:25.516713 through 06:29:25.517399) elided ...]
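The bracketed line above is the DPDK EAL command line the nvmf app was launched with: -c 0xF0 restricts it to cores 4-7, --base-virtaddr fixes where hugepage memory is mapped, --file-prefix=spdk0 keeps its hugepage files separate from other processes, and --proc-type=auto lets EAL pick primary or secondary mode. As a hedged illustration (generic DPDK usage, not this repository's code; the --log-level flags are omitted for brevity), the same parameters could be handed to rte_eal_init() like this:

```c
/* Hedged sketch: initializing DPDK's EAL with the parameters logged above.
 * SPDK's env layer does this internally; flag values are copied from the
 * log, and the program itself is illustrative only. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* process name */
        "-c", "0xF0",                      /* core mask: cores 4-7 */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",  /* fixed base for memory mappings */
        "--match-allocations",             /* free hugepages as allocated */
        "--file-prefix=spdk0",             /* per-process hugepage namespace */
        "--proc-type=auto",                /* primary/secondary autodetect */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    /* ... the application would set up its transports/listeners here ... */
    return 0;
}
```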
00:30:31.045 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence continues uninterrupted from 06:29:25.517701 through 06:29:25.527656; duplicate entries elided ...]
00:30:31.045 [2024-12-09 06:29:25.528040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.528051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-12-09 06:29:25.528391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.528403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-12-09 06:29:25.528602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.528619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-12-09 06:29:25.528859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.528871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-12-09 06:29:25.529218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.529229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-12-09 06:29:25.529421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.045 [2024-12-09 06:29:25.529434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.529715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.529726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.530089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.530101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.530432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.530443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.530808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.530819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 
00:30:31.046 [2024-12-09 06:29:25.531133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.531145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.531359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.531371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.531671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.531682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.532023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.532034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.532239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.532251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.532542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.532554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.532907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.532917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.533120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.533133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.533324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.533334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.533645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.533657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 
00:30:31.046 [2024-12-09 06:29:25.533978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.533993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.534389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.534399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.534726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.534737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.534954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.534967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.535299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.535309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.535603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.535614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.535923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.535934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.536236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.536246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.536455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.536466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.536639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.536650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 
00:30:31.046 [2024-12-09 06:29:25.536983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.536994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.537331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.537341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.537546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.537557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.537932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.537943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.538122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.538134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.538313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.538325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.538652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.538664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.538991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.539001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.539211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.539222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.539541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 
00:30:31.046 [2024-12-09 06:29:25.539871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.539882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.540154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.540166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.540367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.540378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.046 [2024-12-09 06:29:25.540576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.046 [2024-12-09 06:29:25.540588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.046 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.540927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.540938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.541238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.541249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.541556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.541567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.541963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.541975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.542275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.542286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.542626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.542637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 
00:30:31.047 [2024-12-09 06:29:25.542935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.542945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.543160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.543172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.543354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.543365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.543725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.544081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.544092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.544415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.544425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.544751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.544762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.545085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.545096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.545395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.545406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.545690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.545701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 
00:30:31.047 [2024-12-09 06:29:25.546019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.546032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.546357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.546369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.546616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.546628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.546840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.546851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.547194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.547204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.547523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.547535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.547835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.547849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.548128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.548139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.548458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.548471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.548673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.548685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 
00:30:31.047 [2024-12-09 06:29:25.548986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.548996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.549338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.549349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.549657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.549668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.550005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.550016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.550375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.550387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.550722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.550733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.551052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.551062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.551383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.551393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.551685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.551696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.552012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.552021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 
00:30:31.047 [2024-12-09 06:29:25.552347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.552356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.047 [2024-12-09 06:29:25.552682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.047 [2024-12-09 06:29:25.552692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.047 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.553059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.553071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.553271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.553282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.553617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.553629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.553824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.553834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.554121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.554131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.554478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.554488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.554591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.554601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.554934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.554944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 
00:30:31.048 [2024-12-09 06:29:25.555290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.555299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.555626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.555638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.555846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.555858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.556184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.556195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.556518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.556530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.556882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.556892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.557235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.557246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.557469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.557483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.557780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.557792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.558147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.558158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 
00:30:31.048 [2024-12-09 06:29:25.558519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.558785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.558797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.559002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.559216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.559226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.559566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.559577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.559905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.559917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.560244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.560254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.560619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.560631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.560937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.560948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.561139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.561150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 
00:30:31.048 [2024-12-09 06:29:25.561318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.561329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.561640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.561652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.561994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.562006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.562341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.562354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.562548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.562559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.562853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.048 [2024-12-09 06:29:25.562863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.048 qpair failed and we were unable to recover it. 00:30:31.048 [2024-12-09 06:29:25.563200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.563214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.563536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.563547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.563882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.563894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.564225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.564237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 
00:30:31.049 [2024-12-09 06:29:25.564305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.564316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.564441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.564456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.564779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.564789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.564989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.564999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.565264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.565275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.565563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.565574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.565888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.565900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.566089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.566101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.566383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.566395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.566652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.566662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 
00:30:31.049 [2024-12-09 06:29:25.566952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.566963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.567291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.567302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.567617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.567628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.567981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.567992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.568314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.568327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.568664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.568675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.569004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.569015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.569326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.569337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.569707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.569718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.570043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.570054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 
00:30:31.049 [2024-12-09 06:29:25.570388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.570405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.570792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.570803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.570988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.570999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.571284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.571295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.571621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.571632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.571912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.571922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.572174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.572512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.572523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.572872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.572882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.573219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.573228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 
00:30:31.049 [2024-12-09 06:29:25.573544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.573555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.573826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.573837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.574182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.574192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.049 [2024-12-09 06:29:25.574507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.049 [2024-12-09 06:29:25.574518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.049 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.574811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.574822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.575141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.575152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.575499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.575511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.575838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.575848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.576055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.576065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.576413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.576423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 
00:30:31.050 [2024-12-09 06:29:25.576724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.576735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.577069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.577081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.577396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.577406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.577738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.577748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.578040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.578050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.578229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.578239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.578587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.578598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.578778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.578789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.579104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.579114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.579272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.579284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 
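Every retry in the block above fails the same way: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2:4420 (the standard NVMe/TCP port) when posix_sock_create tried, so nvme_tcp_qpair_connect_sock could never bring the qpair up. A standalone probe along these lines (illustrative only, not SPDK code) reproduces the exact errno the log reports whenever no listener is present:

/* Minimal probe, not SPDK code: one TCP connect() to the address and port
 * from the log. With no listener on 10.0.0.2:4420, Linux fails the call
 * with errno 111 (ECONNREFUSED), matching the posix_sock_create lines. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port in the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}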
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Read completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 Write completed with error (sct=0, sc=8)
00:30:31.050 starting I/O failed
00:30:31.050 [2024-12-09 06:29:25.579941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.050 [2024-12-09 06:29:25.580378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.050 [2024-12-09 06:29:25.580416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.050 qpair failed and we were unable to recover it.
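The burst above is the queue pair being torn down: 32 outstanding reads and writes all complete with sct=0, sc=8 right before the CQ transport error -6 (on Linux, -ENXIO, which strerror renders as the "No such device or address" seen in the log). Assuming those fields map to the NVMe generic command status set (status code type 0), status code 0x8 is Command Aborted due to SQ Deletion, i.e. the I/Os were aborted because their submission queue went away, not because the device failed them. A small decoder sketch under that assumption (values recalled from the NVMe base spec, not copied from SPDK headers):

/* Sketch decoder for the (sct, sc) pairs in the burst above, assuming the
 * NVMe generic command status set; not taken from SPDK's definitions. */
#include <stdio.h>

static const char *generic_status(unsigned sct, unsigned sc)
{
    if (sct != 0)
        return "non-generic status code type";
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x6: return "Internal Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "other generic status";
    }
}

int main(void)
{
    unsigned sct = 0, sc = 8;   /* the pair every failed I/O above carries */
    printf("sct=%u, sc=%u -> %s\n", sct, sc, generic_status(sct, sc));
    return 0;
}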
00:30:31.050 [2024-12-09 06:29:25.580757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.580769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.581112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.581124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.581447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.581470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.581859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.581872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.582190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.582203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.582385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.582398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.582584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.582597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.582920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.582932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.583307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.583320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.050 qpair failed and we were unable to recover it. 00:30:31.050 [2024-12-09 06:29:25.583565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.050 [2024-12-09 06:29:25.583577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 
00:30:31.051 [2024-12-09 06:29:25.583900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.583912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.584235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.584246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.584548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.584559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.584663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.584675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.584872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.584885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.585165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.585176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.585403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.585415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.585691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.585703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.585895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.586252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.586264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 
00:30:31.051 [2024-12-09 06:29:25.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.586509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.586841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.586852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.587151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.587162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.587470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.587482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.587787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.587797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.588061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.588073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.588393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.588404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.588680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.588692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.589030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.589360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.589370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 
00:30:31.051 [2024-12-09 06:29:25.589561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.589574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.589779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.589792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.590152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.590163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.590519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.590531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.590849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.590860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.591199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.591210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.591534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.591546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.591895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.591907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.591931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:31.051 [2024-12-09 06:29:25.592121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.051 [2024-12-09 06:29:25.592132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.051 qpair failed and we were unable to recover it.
00:30:31.051 [2024-12-09 06:29:25.592481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.592493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.592801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.592815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.593002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.051 [2024-12-09 06:29:25.593013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.051 qpair failed and we were unable to recover it. 00:30:31.051 [2024-12-09 06:29:25.593355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.593368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.593725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.593737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.594043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.594054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.594261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.594272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.594590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.594602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.594894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.594905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.595213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.595223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 
00:30:31.052 [2024-12-09 06:29:25.595548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.595560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.595926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.595937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.596022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.596032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.596341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.596351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.052 [2024-12-09 06:29:25.596691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.052 [2024-12-09 06:29:25.596703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.052 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.597029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.597043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.597380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.597397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.597738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.597751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.597961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.597972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.598254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.598266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 
00:30:31.328 [2024-12-09 06:29:25.598456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.598469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.598773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.598784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.599090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.599102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.599299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.599309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.599590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.599601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.599946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.599956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.600180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.600194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.600515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.600526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.600849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.328 [2024-12-09 06:29:25.600860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.328 qpair failed and we were unable to recover it. 00:30:31.328 [2024-12-09 06:29:25.601203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.601213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 
00:30:31.329 [2024-12-09 06:29:25.601483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.601494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.601796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.601807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.601916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.601926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.602238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.602249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.602588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.602600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.602901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.602912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.603093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.603104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.603398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.603410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.603740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.603752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.603832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.603843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 
00:30:31.329 [2024-12-09 06:29:25.603984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.603995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.604287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.604299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.604613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.604626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.604943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.604954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.605299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.605312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.605640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.605651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.605860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.605875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.606202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.606215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.606518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.606529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.606908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.606921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 
00:30:31.329 [2024-12-09 06:29:25.607255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.607267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.607560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.607571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.607746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.607757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.608075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.608086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.608387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.608397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.608608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.608619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.608980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.608995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.609313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.609324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.609550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.609561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.609736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.609746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 
00:30:31.329 [2024-12-09 06:29:25.610029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.610040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.610334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.610345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.610652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.610663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.610948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.610959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.611299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.611310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.611607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.611618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.329 qpair failed and we were unable to recover it. 00:30:31.329 [2024-12-09 06:29:25.611902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.329 [2024-12-09 06:29:25.611912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.612225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.612238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.612551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.612562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.612756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.612766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 
00:30:31.330 [2024-12-09 06:29:25.612943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.612954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.613220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.613231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.613550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.613562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.613872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.613882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.614193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.614205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.614415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.614743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.614754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.615052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.615063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.615403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.615414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 00:30:31.330 [2024-12-09 06:29:25.615781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.330 [2024-12-09 06:29:25.615793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.330 qpair failed and we were unable to recover it. 
00:30:31.330 [2024-12-09 06:29:25.616128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.330 [2024-12-09 06:29:25.616139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.330 qpair failed and we were unable to recover it.
00:30:31.330 [... the same three-line connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 06:29:25.616383 through 06:29:25.636684 ...]
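On Linux, errno 111 is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 when these attempts were made. A quick way to confirm the errno mapping from a shell (assumes a stock python3 on the build host; the command is illustrative, not part of this run):

  # print the symbolic name and message for errno 111 on this platform
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # expected on Linux: ECONNREFUSED - Connection refused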
00:30:31.332 [... same connect()/qpair failure sequence repeated from 06:29:25.637004 through 06:29:25.638352 ...]
00:30:31.332 [2024-12-09 06:29:25.638355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:31.332 [2024-12-09 06:29:25.638393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:31.332 [2024-12-09 06:29:25.638398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:31.332 [2024-12-09 06:29:25.638403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:31.332 [2024-12-09 06:29:25.638408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:31.332 [... same connect()/qpair failure sequence repeated from 06:29:25.638685 through 06:29:25.639349 ...]
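The app_setup_trace notices above spell out how to capture the trace data while the target runs. A minimal sketch of both options (assumes the SPDK tools are on PATH and that this build's spdk_trace accepts -f for offline trace files, as current SPDK documents):

  # live snapshot of the 'nvmf' tracepoint group from shared-memory instance 0
  spdk_trace -s nvmf -i 0
  # or preserve the shared-memory trace file named in the notice for later analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  spdk_trace -f /tmp/nvmf_trace.0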
00:30:31.332 [... same connect()/qpair failure sequence repeated from 06:29:25.639677 through 06:29:25.640229 ...]
00:30:31.332 [2024-12-09 06:29:25.640417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:31.332 [2024-12-09 06:29:25.640555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.332 [2024-12-09 06:29:25.640579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.332 qpair failed and we were unable to recover it.
00:30:31.332 [2024-12-09 06:29:25.640602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:31.332 [2024-12-09 06:29:25.640785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.332 [2024-12-09 06:29:25.640796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.332 qpair failed and we were unable to recover it.
00:30:31.332 [2024-12-09 06:29:25.640872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:31.332 [2024-12-09 06:29:25.640873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:31.332 [2024-12-09 06:29:25.640977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.332 [2024-12-09 06:29:25.640992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.332 qpair failed and we were unable to recover it.
00:30:31.332 [... same connect()/qpair failure sequence repeated from 06:29:25.641293 through 06:29:25.641988 ...]
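The reactor_run notices mark the SPDK event loops coming online, one per core, on cores 4-7; their start messages interleave with the reconnect errors because both sides log concurrently. That core set would correspond to a launch-time core mask like the one below (an illustration of the -m option shared by SPDK apps, not the actual command line from this run):

  # cores 4,5,6,7 -> mask bits 4..7 -> 0xF0
  nvmf_tgt -m 0xF0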
00:30:31.332 [2024-12-09 06:29:25.642180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.332 [2024-12-09 06:29:25.642192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.332 qpair failed and we were unable to recover it.
00:30:31.332 [... the same three-line sequence repeats unchanged for every reconnect attempt from 06:29:25.642415 through 06:29:25.674267 ...]
00:30:31.335 [2024-12-09 06:29:25.674592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.336 [2024-12-09 06:29:25.674602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.336 qpair failed and we were unable to recover it.
00:30:31.336 [2024-12-09 06:29:25.674703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.674713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.674891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.674900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.675187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.675198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.675243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.675252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.675406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.675417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.675719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.675728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.676008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.676207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.676434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.676628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 
00:30:31.336 [2024-12-09 06:29:25.676681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.676984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.676994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.677303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.677313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.677630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.677639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.677957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.677968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.678303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.678313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.678749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.678758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.678952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.678962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.679288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.679297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.679563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.679573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 
00:30:31.336 [2024-12-09 06:29:25.679940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.679951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.680136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.680146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.680467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.680477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.680746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.680755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.680980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.680989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.681372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.681381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.681671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.681681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.681850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.681860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.682052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.682061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.682427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.682437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 
00:30:31.336 [2024-12-09 06:29:25.682735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.682745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.682920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.682929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.683210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.683219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.683524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.683851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.683861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.684185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.684194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.684485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.684495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.336 qpair failed and we were unable to recover it. 00:30:31.336 [2024-12-09 06:29:25.684669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.336 [2024-12-09 06:29:25.684678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.684965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.684974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.685321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.685330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 
00:30:31.337 [2024-12-09 06:29:25.685661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.685672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.685979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.685989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.686293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.686303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.686616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.686626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.686918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.686928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.687108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.687119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.687453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.687467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.687518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.687528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.687717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.687728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.688040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.688049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 
00:30:31.337 [2024-12-09 06:29:25.688354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.688364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.688693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.688703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.688862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.688872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.689051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.689061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.689381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.689391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.689749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.689758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.690066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.690076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.690380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.690389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.690672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.690682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.690845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.690855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 
00:30:31.337 [2024-12-09 06:29:25.691032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.691041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.691309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.691319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.691614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.691624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.692006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.692017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.692324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.692333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.692610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.692620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.692930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.692940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.693123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.693133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.693335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.693345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.693676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.693686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 
00:30:31.337 [2024-12-09 06:29:25.694002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.694012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.337 [2024-12-09 06:29:25.694354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.337 [2024-12-09 06:29:25.694364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.337 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.694580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.694591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.694744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.694753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.694945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.694955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.695189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.695199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.695550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.695735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.695746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.695984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.695993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.696299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.696309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 
00:30:31.338 [2024-12-09 06:29:25.696474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.696484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.696676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.696686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.697015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.697024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.697299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.697309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.697483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.697494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.697662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.697672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.697862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.697873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.698202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.698213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.698412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.698422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.698597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.698608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 
00:30:31.338 [2024-12-09 06:29:25.698916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.698925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.699192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.699202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.699504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.699514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.699762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.699772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.700114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.700123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.700272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.700281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.700612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.700623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.701010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.701020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.701306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.701316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.701612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.701622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 
00:30:31.338 [2024-12-09 06:29:25.701871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.701880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.702071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.702080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.702240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.702249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.702420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.702429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.702616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.702626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.702874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.702884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.703205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.703214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.703427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.703437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.703711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.703722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 00:30:31.338 [2024-12-09 06:29:25.704019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.338 [2024-12-09 06:29:25.704028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.338 qpair failed and we were unable to recover it. 
00:30:31.338 [2024-12-09 06:29:25.704340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.704350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.704440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.704457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.704744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.704753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.705042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.705054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.705215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.705225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.705304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.705314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.705616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.705626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.705899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.705909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.706115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.706371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 
00:30:31.339 [2024-12-09 06:29:25.706442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.706619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.706913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.706987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.706997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.707199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.707208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.707390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.707400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.707693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.707703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.707998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.708009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.708322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.708331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 00:30:31.339 [2024-12-09 06:29:25.708614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.339 [2024-12-09 06:29:25.708624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.339 qpair failed and we were unable to recover it. 
00:30:31.339 [2024-12-09 06:29:25.708940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.339 [2024-12-09 06:29:25.708950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.339 qpair failed and we were unable to recover it.
[the connect()/qpair-failure triplet above repeats, with advancing timestamps, for tqpair=0x7f71a4000b90 from 06:29:25.709109 through 06:29:25.730840]
00:30:31.341 [2024-12-09 06:29:25.731258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.341 [2024-12-09 06:29:25.731360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420
00:30:31.341 qpair failed and we were unable to recover it.
[five further identical attempts for tqpair=0x7f71b0000b90, 06:29:25.731834 through 06:29:25.733249]
00:30:31.341 [2024-12-09 06:29:25.733574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.341 [2024-12-09 06:29:25.733675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:31.341 qpair failed and we were unable to recover it.
[two further identical attempts for tqpair=0xaaed30 at 06:29:25.734066 and 06:29:25.734212]
00:30:31.342 Read completed with error (sct=0, sc=8)
00:30:31.342 starting I/O failed
[the pair of records above repeats for 32 outstanding I/Os in total, a mix of Read and Write completions, each "completed with error (sct=0, sc=8)" followed by "starting I/O failed"]
00:30:31.342 [2024-12-09 06:29:25.734955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.342 [2024-12-09 06:29:25.735215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.342 [2024-12-09 06:29:25.735269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.342 qpair failed and we were unable to recover it.
[the connect()/qpair-failure triplet above repeats, with advancing timestamps, for tqpair=0x7f71a8000b90 from 06:29:25.735366 through 06:29:25.763131]
00:30:31.345 [2024-12-09 06:29:25.763498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.763508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.763675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.763684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.764000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.764016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.764177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.764187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.764405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.764752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.764762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.765045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.765055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.765251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.765260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.765418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.765428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.765604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.765614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 
00:30:31.345 [2024-12-09 06:29:25.765977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.765986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.766165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.766175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.766518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.766527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.766828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.766838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.767020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.767029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.767216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.767225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.767392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.767402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.767740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.767750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.767948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.767957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.768308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.768318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 
00:30:31.345 [2024-12-09 06:29:25.768496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.768505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.768731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.768741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.769103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.769112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.769407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.769416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.769721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.769731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.770031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.770040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.770343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.770352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.770689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.770698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.770876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.770885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.345 [2024-12-09 06:29:25.771155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.771165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 
00:30:31.345 [2024-12-09 06:29:25.771349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.345 [2024-12-09 06:29:25.771359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.345 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.771568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.771577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.771927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.771937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.772237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.772247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.772427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.772437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.772632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.772642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.772944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.772954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.773108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.773118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.773440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.773454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.773738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.773748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 
00:30:31.346 [2024-12-09 06:29:25.773904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.773913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.774245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.774260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.774581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.774592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.774895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.774904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.775118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.775128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.775440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.775452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.775725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.775734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.775917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.775926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.776238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.776247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.776572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.776582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 
00:30:31.346 [2024-12-09 06:29:25.776915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.776924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.777322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.777332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.777746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.777755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.777797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.777806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.778100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.778109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.778334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.778343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.778522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.778531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.778848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.778858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.778981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.778990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.779333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 
00:30:31.346 [2024-12-09 06:29:25.779523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.779534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.779686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.779696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.779898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.779907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.780276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.780285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.780577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.780586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.346 qpair failed and we were unable to recover it. 00:30:31.346 [2024-12-09 06:29:25.780872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.346 [2024-12-09 06:29:25.780881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.781181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.781190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.781503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.781513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.781829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.781839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.782143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.782152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 
00:30:31.347 [2024-12-09 06:29:25.782457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.782467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.782756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.782765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.783049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.783059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.783362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.783372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.783668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.783679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.783841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.783850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.784126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.784135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.784453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.784463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.784773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.784782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.784961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.784971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 
00:30:31.347 [2024-12-09 06:29:25.785248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.785257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.785560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.785572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.785742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.785752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.785920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.785930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.786222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.786232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.786530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.786539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.786723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.786739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.787054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.787063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.787218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.787227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.787567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.787576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 
00:30:31.347 [2024-12-09 06:29:25.787746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.787755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.788083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.788092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.788474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.788483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.788701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.788710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.788870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.788879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.789170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.789180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.789257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.789266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.789425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.789435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.789749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.789758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.789922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.789932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 
00:30:31.347 [2024-12-09 06:29:25.790291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.790302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.790462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.790473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.790684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.790693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.347 qpair failed and we were unable to recover it. 00:30:31.347 [2024-12-09 06:29:25.791007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.347 [2024-12-09 06:29:25.791016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.791063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.791072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.791357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.791366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.791665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.791674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.791878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.791887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.792193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.792202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.792360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.792370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 
00:30:31.348 [2024-12-09 06:29:25.792748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.792757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.792951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.792960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.793239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.793250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.793557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.793566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.793877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.793891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.794204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.794213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.794511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.794521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.794680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.794690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.794834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.794844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.795045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.795054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 
00:30:31.348 [2024-12-09 06:29:25.795376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.795385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.795557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.795806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.795815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.796118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.796127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.796339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.796349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.796692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.796701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.797033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.797043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.797380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.797389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.797553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.797563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 00:30:31.348 [2024-12-09 06:29:25.797927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.348 [2024-12-09 06:29:25.797938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.348 qpair failed and we were unable to recover it. 
00:30:31.348 [2024-12-09 06:29:25.798253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.348 [2024-12-09 06:29:25.798263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.348 qpair failed and we were unable to recover it.
00:30:31.354 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-12-09 06:29:25.798559 through 06:29:25.853176 ...]
00:30:31.354 [2024-12-09 06:29:25.853484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.853493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.853877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.853886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.854190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.854200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.854385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.854394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.854622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.854631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.854792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.854801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.855136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.855146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.855348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.855358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.855536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.855546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.855819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.855828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 
00:30:31.354 [2024-12-09 06:29:25.856051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.856060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.856353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.856364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.856519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.856529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.856797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.856807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.857110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.857120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.857426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.857436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.857640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.857651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.857851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.857860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.858034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.858044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.858336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.858347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 
00:30:31.354 [2024-12-09 06:29:25.858523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.858535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.858811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.858824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.859178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.859188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.859484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.859494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.859769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.859778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.860089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.860098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.354 qpair failed and we were unable to recover it. 00:30:31.354 [2024-12-09 06:29:25.860256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.354 [2024-12-09 06:29:25.860266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.860451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.860461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.860543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.860552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.860857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.860868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 
00:30:31.355 [2024-12-09 06:29:25.860909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.860919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.861255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.861264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.861550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.861560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.861869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.861878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.862159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.862168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.862366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.862375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.862695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.862705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.863008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.863018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.863321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.863331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.863526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.863536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 
00:30:31.355 [2024-12-09 06:29:25.863885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.863894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.864194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.864203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.864525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.864693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.864702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.864906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.864916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.865228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.865238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.865558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.865568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.865740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.865749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.865962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.865971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.866270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.866285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 
00:30:31.355 [2024-12-09 06:29:25.866460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.866470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.866634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.866643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.866917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.866926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.867126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.867136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.867434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.867444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.867748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.867758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.868040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.868050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.868354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.868363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.868537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.868547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.868728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.868737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 
00:30:31.355 [2024-12-09 06:29:25.868958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.355 [2024-12-09 06:29:25.868967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.355 qpair failed and we were unable to recover it. 00:30:31.355 [2024-12-09 06:29:25.869296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.869308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.869603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.869613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.869802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.869813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.870163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.870172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.870468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.870477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.870648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.870657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.870970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.870979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.871181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.871191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.871522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.871532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 
00:30:31.356 [2024-12-09 06:29:25.871721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.871731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.872908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.872917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.873274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.873283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.873491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.873500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.873652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.873661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 
00:30:31.356 [2024-12-09 06:29:25.873927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.873938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.874121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.874132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.874397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.874407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.874691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.874701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.874884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.874894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.875092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.875101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.875412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.875421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.875630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.875639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.875939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.875948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.876153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.876163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 
00:30:31.356 [2024-12-09 06:29:25.876356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.876365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.876652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.876661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.876826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.876836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.877017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.877026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.877266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.877275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.877631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.877640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.877921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.877930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.878166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.878176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.878477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.878487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.356 qpair failed and we were unable to recover it. 00:30:31.356 [2024-12-09 06:29:25.878813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.356 [2024-12-09 06:29:25.878823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 
00:30:31.357 [2024-12-09 06:29:25.879101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.879110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.879291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.879304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.879470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.879479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.879739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.879748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.880050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.880059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.880336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.880347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.880651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.880661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.880967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.880976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.881268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.881277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.881566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.881575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 
00:30:31.357 [2024-12-09 06:29:25.881732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.881742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.882108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.882118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.882316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.882325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.882671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.882681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.882959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.882969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.883263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.883272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.883571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.883581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.883892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.883902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.884069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.884079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.884266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.884275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 
00:30:31.357 [2024-12-09 06:29:25.884578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.884587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.884784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.884793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.884983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.884992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.885161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.885171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.885480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.885489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.885805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.885814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.886109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.886118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.886395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.886405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.886692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.886702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 00:30:31.357 [2024-12-09 06:29:25.886978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.357 [2024-12-09 06:29:25.886988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.357 qpair failed and we were unable to recover it. 
00:30:31.357 [2024-12-09 06:29:25.887144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.357 [2024-12-09 06:29:25.887153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.357 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 06:29:25.887 through 06:29:25.941 (console timestamps 00:30:31.357-00:30:31.640); every reconnect attempt to 10.0.0.2:4420 was refused and the qpair could not be recovered ...]
00:30:31.640 [2024-12-09 06:29:25.942248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.942258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.942563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.942573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.942855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.942864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.943164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.943173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.943473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.943482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.943662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.943672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.640 [2024-12-09 06:29:25.943837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.640 [2024-12-09 06:29:25.943848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.640 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.944139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.944148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.944307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.944316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.944562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.944572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 
00:30:31.641 [2024-12-09 06:29:25.944768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.944778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.945080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.945090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.945367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.945376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.945536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.945546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.945858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.945867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.946146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.946156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.946459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.946468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.946767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.946776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.947124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.947134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.947412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.947422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 
00:30:31.641 [2024-12-09 06:29:25.947712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.947722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.947888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.947897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.947937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.947945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.948265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.948275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.948569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.948578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.948767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.948777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.949092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.949102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.949406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.949416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.949630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.949643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.949798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.949808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 
00:30:31.641 [2024-12-09 06:29:25.950107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.950116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.950343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.950353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.950634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.950643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.950862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.950871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.951148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.951157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.951310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.951319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.951645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.951654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.951951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.951960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.952172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.952181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.952483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.952492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 
00:30:31.641 [2024-12-09 06:29:25.952777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.952786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.952944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.952954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.641 [2024-12-09 06:29:25.953318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.641 [2024-12-09 06:29:25.953409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.641 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.953940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.954031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.954443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.954501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71b0000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.954811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.954820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.955105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.955115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.955426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.955436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.955622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.955632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.956054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.956063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 
00:30:31.642 [2024-12-09 06:29:25.956352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.956361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.956577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.956588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.956741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.956750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.957083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.957093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.957300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.957310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.957515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.957524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.957627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.957636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.957794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.957804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.958011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.958020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.958325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.958335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 
00:30:31.642 [2024-12-09 06:29:25.958383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.958392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.958534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.958543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.958870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.959061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.959071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.959253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.959264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.959569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.959579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.959854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.959864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.959906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.959915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.960069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.960080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.960255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.960265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 
00:30:31.642 [2024-12-09 06:29:25.960525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.960534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.960624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.960632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.960877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.960886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.961188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.961197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.961439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.961453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.961677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.961686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.962037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.642 [2024-12-09 06:29:25.962047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.642 qpair failed and we were unable to recover it. 00:30:31.642 [2024-12-09 06:29:25.962327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.962336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.962614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.962624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.962915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.962924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 
00:30:31.643 [2024-12-09 06:29:25.963131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.963182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.963404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.963736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.963809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.963986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.963996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.964196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.964205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.964402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.964411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.964699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.964709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.964747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.964756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 
00:30:31.643 [2024-12-09 06:29:25.964944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.964955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.965141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.965151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.965306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.965316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.965571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.965581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.965773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.965782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.966089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.966098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.966268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.966277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.966587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.966597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.966914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.966923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.967109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.967119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 
00:30:31.643 [2024-12-09 06:29:25.967334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.967343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.967657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.967667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.967970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.967979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.968245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.968254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.968568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.968578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.968880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.968890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.969194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.969204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.969360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.969370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.969642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.969654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.970030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.970040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 
00:30:31.643 [2024-12-09 06:29:25.970321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.970331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.643 [2024-12-09 06:29:25.970637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.643 [2024-12-09 06:29:25.970646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.643 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.970918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.970927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.971224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.971234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.971537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.971546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.971864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.971874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.972062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.972072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.972391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.972400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.972699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.972708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.972870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.972879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 
00:30:31.644 [2024-12-09 06:29:25.973088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.973097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.973271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.973280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.973339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.973347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.973530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.973540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.973754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.973764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.974068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.974078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.974261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.974271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.974431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.974441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.974622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.974632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 00:30:31.644 [2024-12-09 06:29:25.974686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.644 [2024-12-09 06:29:25.974695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.644 qpair failed and we were unable to recover it. 
00:30:31.644 [2024-12-09 06:29:25.974966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.644 [2024-12-09 06:29:25.974975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.644 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 120 more times between 06:29:25.975298 and 06:29:26.003375 ...]
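errno = 111 in the posix_sock_create failures above is ECONNREFUSED: host 10.0.0.2 answered, but nothing was accepting TCP connections on port 4420 (the IANA-assigned NVMe/TCP port) while the initiator kept retrying. A minimal standalone probe, assuming only the address and port shown in the log (illustrative C, not SPDK's posix.c), reproduces the same errno when no target is listening:

    /* connect_probe.c - try one TCP connect to the target from the log and
     * print errno on failure; with no listener this prints errno = 111. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            close(fd);
            return 1;
        }
        printf("connected\n");
        close(fd);
        return 0;
    }

If such a probe also reports errno 111, the problem is on the target side (the nvmf target process never started listening on 4420, or exited), not in the initiator's socket layer; a hang ending in errno 110 (ETIMEDOUT) would instead point at routing or filtering.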
[... three more identical failures of tqpair=0x7f71a8000b90 between 06:29:26.003555 and 06:29:26.003934 ...]
00:30:31.648 [2024-12-09 06:29:26.004108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa38a0 is same with the state(6) to be set
00:30:31.648 [2024-12-09 06:29:26.004521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.648 [2024-12-09 06:29:26.004609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.648 qpair failed and we were unable to recover it.
[... two more identical failures of tqpair=0x7f71a4000b90 at 06:29:26.005021 and 06:29:26.005255, then three of tqpair=0x7f71a8000b90 through 06:29:26.005922 ...]
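The one non-connect error in this stretch, nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state, fires when a qpair is asked to move into the receive state it already holds; the "(6)" is the numeric value of that state. A minimal sketch of that kind of guard, with hypothetical names since the log gives only the number (this is not the actual nvme_tcp.c code):

    #include <stdio.h>

    /* RECV_STATE_6 is a hypothetical stand-in; the log identifies the state
     * only by its numeric value, 6. */
    enum recv_state { RECV_STATE_6 = 6 };

    static enum recv_state g_recv_state = RECV_STATE_6;

    static void set_recv_state(enum recv_state new_state)
    {
        if (g_recv_state == new_state) {
            /* Same shape as the log line above. */
            fprintf(stderr, "The recv state is same with the state(%d) to be set\n",
                    (int)new_state);
            return;
        }
        g_recv_state = new_state;
    }

    int main(void)
    {
        set_recv_state(RECV_STATE_6); /* re-setting the current state trips the guard */
        return 0;
    }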
[... the same tqpair=0x7f71a8000b90 connect failure repeats roughly 30 more times between 06:29:26.006213 and 06:29:26.013696 ...]
[... seven more identical failures of tqpair=0x7f71a8000b90 between 06:29:26.013884 and 06:29:26.014886 ...]
00:30:31.649 [2024-12-09 06:29:26.015026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.649 [2024-12-09 06:29:26.015109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:31.649 qpair failed and we were unable to recover it.
00:30:31.649 [2024-12-09 06:29:26.015383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.649 [2024-12-09 06:29:26.015432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.649 qpair failed and we were unable to recover it.
00:30:31.649 [2024-12-09 06:29:26.015535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.649 [2024-12-09 06:29:26.015546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.649 qpair failed and we were unable to recover it.
00:30:31.649 [2024-12-09 06:29:26.015731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.015740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.015940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.015950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.016277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.016287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.016478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.016488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.016758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.016768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.017156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.017166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.017433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.017442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.017732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.017741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.017918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.017927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 00:30:31.649 [2024-12-09 06:29:26.018279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.649 [2024-12-09 06:29:26.018288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.649 qpair failed and we were unable to recover it. 
00:30:31.650 [2024-12-09 06:29:26.028487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.650 [2024-12-09 06:29:26.028497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.650 qpair failed and we were unable to recover it.
00:30:31.650 [2024-12-09 06:29:26.028676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.650 [2024-12-09 06:29:26.028685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.650 qpair failed and we were unable to recover it.
00:30:31.650 [2024-12-09 06:29:26.029080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.650 [2024-12-09 06:29:26.029162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:31.650 qpair failed and we were unable to recover it.
00:30:31.650 [2024-12-09 06:29:26.029409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.650 [2024-12-09 06:29:26.029441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420
00:30:31.650 qpair failed and we were unable to recover it.
00:30:31.650 [2024-12-09 06:29:26.029637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.650 [2024-12-09 06:29:26.029649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.651 [2024-12-09 06:29:26.029916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.651 [2024-12-09 06:29:26.029926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.651 [2024-12-09 06:29:26.030269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.651 [2024-12-09 06:29:26.030278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.651 [2024-12-09 06:29:26.030463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.651 [2024-12-09 06:29:26.030473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.651 [2024-12-09 06:29:26.030823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.651 [2024-12-09 06:29:26.030832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.651 [2024-12-09 06:29:26.031190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.651 [2024-12-09 06:29:26.031201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.651 qpair failed and we were unable to recover it.
00:30:31.655 [2024-12-09 06:29:26.070989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.070998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.071166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.071175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.071267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.071276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.071456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.071466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.071777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.071787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.071991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.072000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.072323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.072332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.072686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.072696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.073025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.073034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.073394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.073403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 
00:30:31.655 [2024-12-09 06:29:26.073684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.073694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.074003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.074013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.074286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.074297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.074594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.074603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.074908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.074918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.075056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.075064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.075366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.075376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.655 [2024-12-09 06:29:26.075705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.655 [2024-12-09 06:29:26.075715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.655 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.076014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.076023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.076301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.076311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 
00:30:31.656 [2024-12-09 06:29:26.076605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.076615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.076785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.076794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.076975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.076984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.077282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.077583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.077592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.077897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.077906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.078203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.078213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.078502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.078513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.078854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.078863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.079039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.079048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 
00:30:31.656 [2024-12-09 06:29:26.079397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.079406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.079575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.079585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.079951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.079961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.080247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.080257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.080409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.080419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.080677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.080686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.080975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.080984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.081285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.081294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.081455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.081465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.081629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.081641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 
00:30:31.656 [2024-12-09 06:29:26.081919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.081929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.082226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.082235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.082460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.082470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.082789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.082805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.083116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.083124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.083490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.083500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.083670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.083680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.083871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.083879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.084044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.084053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.656 [2024-12-09 06:29:26.084327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.084337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 
00:30:31.656 [2024-12-09 06:29:26.084518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.656 [2024-12-09 06:29:26.084528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.656 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.084700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.084709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.084907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.084916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.085218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.085227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.085527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.085536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.085822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.085831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.086126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.086136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.086470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.086480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.086659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.086668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.086967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.086977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 
00:30:31.657 [2024-12-09 06:29:26.087297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.087306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.087482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.087492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.087772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.087781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.088079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.088089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.088472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.088482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.088769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.088778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.088957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.088966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.089077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.089086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.089344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.089353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.089539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.089550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 
00:30:31.657 [2024-12-09 06:29:26.089854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.089863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.090158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.090168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.090323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.090333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.090521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.090532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.090866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.090875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.091178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.091188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.091488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.091497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.091802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.091811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.092118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.092126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.092280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.092293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 
00:30:31.657 [2024-12-09 06:29:26.092632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.092642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.092806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.092816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.093137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.093147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.093441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.093452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.093746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.093755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.094054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.094065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.094388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.657 [2024-12-09 06:29:26.094399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.657 qpair failed and we were unable to recover it. 00:30:31.657 [2024-12-09 06:29:26.094558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.094568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.094738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.094747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.095011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.095020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 
00:30:31.658 [2024-12-09 06:29:26.095187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.095197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.095495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.095505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.095800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.095810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.096116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.096125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.096425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.096435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.096630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.096640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.096936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.096945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.097224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.097233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.097433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.097442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.097751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.097760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 
00:30:31.658 [2024-12-09 06:29:26.097980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.097990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.098303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.098312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.098621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.098630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.098939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.098949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.099290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.099299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.099604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.099613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.099660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.099669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.099869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.099879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.100198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.100208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.100543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.100553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 
00:30:31.658 [2024-12-09 06:29:26.100731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.100741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.101125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.101134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.101426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.101435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.101742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.101751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.101943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.102293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.102303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.102605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.102615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.102776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.102787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.103091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.103101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.103376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.103387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 
00:30:31.658 [2024-12-09 06:29:26.103467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.103477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.103525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.103536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.103691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.103700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.104018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.104027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.104323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.658 [2024-12-09 06:29:26.104333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.658 qpair failed and we were unable to recover it. 00:30:31.658 [2024-12-09 06:29:26.104650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.659 [2024-12-09 06:29:26.104660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.659 qpair failed and we were unable to recover it. 00:30:31.659 [2024-12-09 06:29:26.104964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.659 [2024-12-09 06:29:26.104974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.659 qpair failed and we were unable to recover it. 00:30:31.659 [2024-12-09 06:29:26.105267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.659 [2024-12-09 06:29:26.105276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.659 qpair failed and we were unable to recover it. 00:30:31.659 [2024-12-09 06:29:26.105578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.659 [2024-12-09 06:29:26.105587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.659 qpair failed and we were unable to recover it. 00:30:31.659 [2024-12-09 06:29:26.105863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.659 [2024-12-09 06:29:26.105873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.659 qpair failed and we were unable to recover it. 
00:30:31.659 [2024-12-09 06:29:26.106170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.659 [2024-12-09 06:29:26.106180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.659 qpair failed and we were unable to recover it.
00:30:31.659 [the same three-line failure (connect() errno = 111, tqpair=0x7f71a8000b90, addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for roughly 200 further connection attempts timestamped 06:29:26.106 through 06:29:26.161; the identical entries are elided]
00:30:31.665 [2024-12-09 06:29:26.161778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.161787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.162145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.162154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.162340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.162349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.162625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.162635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.162944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.162953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.163231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.163240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.163551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.163561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.163746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.163756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.164157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.164166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.164318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.164327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 
00:30:31.665 [2024-12-09 06:29:26.164403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.164710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.164719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.165021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.165030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.165343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.165353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.165577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.165586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.165761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.165770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.166033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.166041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.166222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.166232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.166523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.166533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.166823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.166834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 
00:30:31.665 [2024-12-09 06:29:26.166978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.166988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.167285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.167294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.167649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.167660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.168013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.168022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.168185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.168194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.168373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.168382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.168669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.168679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.168956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.168965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.169247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.169256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.169543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.169553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 
00:30:31.665 [2024-12-09 06:29:26.169829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.169838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.169989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.169999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.170331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.665 [2024-12-09 06:29:26.170341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.665 qpair failed and we were unable to recover it. 00:30:31.665 [2024-12-09 06:29:26.170524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.170534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.170575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.170583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.170855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.170865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.171025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.171034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.171222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.171231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.171454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.171464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.171766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.171776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 
00:30:31.666 [2024-12-09 06:29:26.171819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.171828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.172170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.172179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.172492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.172502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.172785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.172794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.172950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.172961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.173127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.173137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.173429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.173439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.173712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.173721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.174037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.174047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.174356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.174366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 
00:30:31.666 [2024-12-09 06:29:26.174666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.174675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.174868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.174878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.175151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.175160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.175325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.175335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.175690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.175699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.175982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.175992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.176310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.176319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.176601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.176611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.176910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.176920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.177224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.177235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 
00:30:31.666 [2024-12-09 06:29:26.177389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.177399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.177625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.177634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.177938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.177947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.178221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.178405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.178415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.178731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.178741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.178897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.178907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.179291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.179301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.179596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.179605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 00:30:31.666 [2024-12-09 06:29:26.179773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.666 [2024-12-09 06:29:26.179783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.666 qpair failed and we were unable to recover it. 
00:30:31.666 [2024-12-09 06:29:26.179970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.179979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.180137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.180146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.180456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.180466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.180788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.180798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.180959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.180968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.181232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.181241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.181419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.181428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.181765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.181775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.182069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.182079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.182265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.182275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 
00:30:31.667 [2024-12-09 06:29:26.182573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.182584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.182737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.182747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.182896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.182905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.183068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.183078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.183409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.183419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.183738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.183747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.183907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.183917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.184206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.184216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.184419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.184428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.184768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.184778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 
00:30:31.667 [2024-12-09 06:29:26.184932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.184941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.185203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.185213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.185520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.185530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.185842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.185851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.186027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.186036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.186245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.186255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.186528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.186538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.186862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.186872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.187024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.187034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.187322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.187335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 
00:30:31.667 [2024-12-09 06:29:26.187533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.187542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.187804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.187814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.188122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.188131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.188385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.188394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.188564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.188573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.188887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.188897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.189191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.189202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.189365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.667 [2024-12-09 06:29:26.189374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.667 qpair failed and we were unable to recover it. 00:30:31.667 [2024-12-09 06:29:26.189671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.189681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.189990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.189999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 
00:30:31.668 [2024-12-09 06:29:26.190162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.190172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.190428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.190438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.190609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.190619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.190915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.190924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.191222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.191232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.191391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.191401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.191437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.191446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.191627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.191637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.191943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.191952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.192143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.192152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 
00:30:31.668 [2024-12-09 06:29:26.192270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.192280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.192465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.192475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.192643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.192652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.193103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.193182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.193701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.193781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.194126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.194136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.194444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.194458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.194614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.194623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.194915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.194924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 00:30:31.668 [2024-12-09 06:29:26.195215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.668 [2024-12-09 06:29:26.195225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.668 qpair failed and we were unable to recover it. 
00:30:31.668 [2024-12-09 06:29:26.195589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.668 [2024-12-09 06:29:26.195599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.668 qpair failed and we were unable to recover it.
[... the same three-record failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times between 06:29:26.195 and 06:29:26.248; duplicate records elided ...]
00:30:31.949 [2024-12-09 06:29:26.248180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.949 [2024-12-09 06:29:26.248189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-12-09 06:29:26.248368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.248378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.248544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.248554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.248751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.248760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.249065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.249075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.249111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.249120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.249269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.249278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.249478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.249488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.249743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.249753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.250125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.250134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.250332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.250341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 
00:30:31.949 [2024-12-09 06:29:26.250640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.250649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.250957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.250966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.251261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.251270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.251432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.251442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.251606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.251616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.251875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.251884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.252201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.252212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.252414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.252424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.252499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.252508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.252583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.252592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 
00:30:31.949 [2024-12-09 06:29:26.252948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.252957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.253156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.253165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.253350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.253359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.253547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.253556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.253722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.253732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.253844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.253852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.254219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.949 [2024-12-09 06:29:26.254333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:31.949 qpair failed and we were unable to recover it. 00:30:31.949 [2024-12-09 06:29:26.254885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.254975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.255165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.255175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.255499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.255509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 
00:30:31.950 [2024-12-09 06:29:26.255800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.255809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.256032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.256041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.256389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.256399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.256603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.256926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.256935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.257209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.257219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.257510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.257520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.257907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.257916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.258115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.258124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.258431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.258441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 
00:30:31.950 [2024-12-09 06:29:26.258828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.258837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.259159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.259169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.259471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.259480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.259770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.259780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.259821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.259830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.260125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.260135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.260416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.260426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.260595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.260605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.260922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.260931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.261242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.261252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 
00:30:31.950 [2024-12-09 06:29:26.261573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.261582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.261851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.261861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.262048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.262058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.262241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.262250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.262423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.262432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.262776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.262787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.262964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.262975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.263144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.263153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.263416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.263426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.263753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.263763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 
00:30:31.950 [2024-12-09 06:29:26.264068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.264078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.264235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.264247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.264592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.264602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.264884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.264894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.265184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.950 [2024-12-09 06:29:26.265194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.950 qpair failed and we were unable to recover it. 00:30:31.950 [2024-12-09 06:29:26.265369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.265378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.265570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.265582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.265759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.265769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.265963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.265972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.266275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.266285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 
00:30:31.951 [2024-12-09 06:29:26.266604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.266614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.266950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.266960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.267113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.267122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.267380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.267390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.267689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.267698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.268071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.268081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.268130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.268139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.268416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.268425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.268612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.268622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.268889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.268898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 
00:30:31.951 [2024-12-09 06:29:26.269084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.269291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.269443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.269626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.269797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.269960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.269970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.270202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.270212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.270395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.270404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.270585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.270597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.270957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.270967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 
00:30:31.951 [2024-12-09 06:29:26.271256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.271266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.271319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.271327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.271620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.271629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.271930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.271940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.272895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.272904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 
00:30:31.951 [2024-12-09 06:29:26.273108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.273117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.273296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.273304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.951 [2024-12-09 06:29:26.273504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.951 [2024-12-09 06:29:26.273513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.951 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.273662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.273672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.273953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.273964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.274334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.274344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.274644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.274657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.274985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.274994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.275299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.275308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.275579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.275589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 
00:30:31.952 [2024-12-09 06:29:26.275802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.275812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.276103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.276112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.276279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.276289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.276615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.276625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.276915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.276931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.277229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.277238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.277406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.277672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.277682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.277988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.277998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.278168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.278177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 
00:30:31.952 [2024-12-09 06:29:26.278499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.278509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.278695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.278704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.278901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.278910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.279217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.279227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.279530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.279539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.279868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.279878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.280248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.280259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.280556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.280566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.280785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.280794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.281038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.281047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 
00:30:31.952 [2024-12-09 06:29:26.281356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.281366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.281554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.281564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.281746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.281755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.282223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.282313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.282707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.282796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaaed30 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.283060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.283072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.283394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.283404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.283560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.283570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.283762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.283772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 00:30:31.952 [2024-12-09 06:29:26.283951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.283961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.952 qpair failed and we were unable to recover it. 
00:30:31.952 [2024-12-09 06:29:26.284260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.952 [2024-12-09 06:29:26.284270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.284595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.284605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.284794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.284803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.285173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.285182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.285346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.285356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.285556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.285566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.285877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.285890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.286187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.286196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.286364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.286373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 00:30:31.953 [2024-12-09 06:29:26.286666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.953 [2024-12-09 06:29:26.286676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.953 qpair failed and we were unable to recover it. 
00:30:31.953 [2024-12-09 06:29:26.286995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.287006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.287183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.287193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.287359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.287369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.287668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.287679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.287950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.287960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.288255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.288265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.288419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.288429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.288611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.288620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.289015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.289024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.289232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.289241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.289552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.289561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.289732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.289743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.289986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.289996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.290286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.290296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.290588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.290598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.290804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.290814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.291121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.291428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.291437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.291769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.291780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.292103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.292113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.292406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.292416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.292712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.292722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.292766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.292775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.292921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.292930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.953 qpair failed and we were unable to recover it.
00:30:31.953 [2024-12-09 06:29:26.293098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.953 [2024-12-09 06:29:26.293108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.293405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.293414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.293575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.293585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.293628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.293637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.293911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.293922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.294263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.294273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.294592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.294602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.294900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.294909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.295183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.295193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.295382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.295391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.295561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.295571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.295804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.295814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.296110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.296122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.296396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.296406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.296693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.296703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.296878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.296888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.297038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.297047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.297346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.297355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.297535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.297555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.297768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.297778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.298094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.298104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.298288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.298297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.298498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.298508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.298792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.298803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.299106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.299278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.299288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.299563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.299855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.299865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.300174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.300185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.300469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.300479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.300777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.300787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.300965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.300976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.301306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.301316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.301567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.301577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.301891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.301901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.302219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.302229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.302465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.302475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.954 [2024-12-09 06:29:26.302776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.954 qpair failed and we were unable to recover it.
00:30:31.954 [2024-12-09 06:29:26.303097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.303106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.303401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.303411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.303706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.303717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.304101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.304112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.304402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.304412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.304587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.304598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.304875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.304885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.305162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.305172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.305476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.305487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.305808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.305818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.306128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.306456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.306467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.306760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.306772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.307067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.307078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.307401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.307415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.307625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.307635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.307816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.307827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.308185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.308196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.308445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.308460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.308754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.308763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.309061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.309265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.309275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.309577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.309587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.309889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.309899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.310063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.310073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.310359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.310368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.310681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.310691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.310868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.310879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.311190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.311200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.311372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.311381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.311568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.311578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.311902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.311912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.312069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.312078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.312347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.312356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.312615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.312624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.312949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.312959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.313167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.313177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.955 [2024-12-09 06:29:26.313327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.955 [2024-12-09 06:29:26.313337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.955 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.313683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.313692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.313991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.314000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.314187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.314196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.314514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.314524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.314826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.314836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.315121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.315131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.315175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.315185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.315401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.315411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.315733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.315743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.316116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.316126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.316278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.316287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.316618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.316628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.317008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.317017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.317169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.317180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.317338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.317348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.317618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.317627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.317915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.317924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.318190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.318200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.318501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.318511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.318808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.318818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.319120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.319129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.319406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.319415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.319684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.319693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.319888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.319897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.320237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.320246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.320497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.320507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.320803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.320813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.321013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.321022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.321212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.321222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.321404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.321414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.321720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.321730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.321924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.321933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.322108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.322117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.322425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.322434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.322610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.322619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.322824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.322834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.323110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.323120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.956 [2024-12-09 06:29:26.323401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.956 [2024-12-09 06:29:26.323411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.956 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.323495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.323504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.323795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.323804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.324120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.324129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.324431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.324441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.324703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.324713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.324920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.324933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.325246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.325256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.325407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.325417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.325638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.325648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.325735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.325744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.326951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.326961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.327131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.327141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.327337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.327347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.327690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.327700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.327871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.328062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.328379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.328388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.328669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.328679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.329042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.329051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.329209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.329218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.329522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.329533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.329812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.329823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.330118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.330127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.330403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.330413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.330616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.330625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.330920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.330930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.331232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.957 [2024-12-09 06:29:26.331243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.957 qpair failed and we were unable to recover it.
00:30:31.957 [2024-12-09 06:29:26.331550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.957 [2024-12-09 06:29:26.331561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.957 qpair failed and we were unable to recover it. 00:30:31.957 [2024-12-09 06:29:26.331728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.957 [2024-12-09 06:29:26.331737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.957 qpair failed and we were unable to recover it. 00:30:31.957 [2024-12-09 06:29:26.331869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.957 [2024-12-09 06:29:26.331878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.957 qpair failed and we were unable to recover it. 00:30:31.957 [2024-12-09 06:29:26.332254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.957 [2024-12-09 06:29:26.332263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.957 qpair failed and we were unable to recover it. 00:30:31.957 [2024-12-09 06:29:26.332529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.957 [2024-12-09 06:29:26.332539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.957 qpair failed and we were unable to recover it. 00:30:31.957 [2024-12-09 06:29:26.332761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.332771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.332936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.332945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.333123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.333389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.333399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.333614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.333624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 
00:30:31.958 [2024-12-09 06:29:26.333966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.333975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.334244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.334259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.334415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.334424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.334694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.334707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.334890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.334900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.335243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.335254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.335427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.335437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.335643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.335653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.335833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.335843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.336021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.336031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 
00:30:31.958 [2024-12-09 06:29:26.336323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.336333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.336517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.336526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.336694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.336703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.336964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.336974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.337295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.337304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.337569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.337579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.337897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.337907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.338216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.338226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.338572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.338582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.338940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.338949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 
00:30:31.958 [2024-12-09 06:29:26.339252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.339262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.339327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.339335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.339607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.339616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.339784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.339794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.340117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.340127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.340398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.340407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.340709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.340719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.341036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.341046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.341244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.958 [2024-12-09 06:29:26.341253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.958 qpair failed and we were unable to recover it. 00:30:31.958 [2024-12-09 06:29:26.341502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.341512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 
00:30:31.959 [2024-12-09 06:29:26.341582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.341592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.341921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.341931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.342227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.342237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.342533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.342542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.342893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.342902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.343190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.343199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.343506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.343516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.343799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.343809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.343995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.344004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 00:30:31.959 [2024-12-09 06:29:26.344322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.959 [2024-12-09 06:29:26.344332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.959 qpair failed and we were unable to recover it. 
00:30:31.959 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:31.959 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:31.959 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:31.959 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:31.959 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.959 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats between the trace lines above, timestamps 06:29:26.344714 through 06:29:26.346463 ...]
00:30:31.959 [2024-12-09 06:29:26.346756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.959 [2024-12-09 06:29:26.346766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.959 qpair failed and we were unable to recover it.
00:30:31.963 [... the three lines above repeat verbatim with log timestamps advancing from 06:29:26.347043 through 06:29:26.379333 ...]
00:30:31.963 [2024-12-09 06:29:26.379623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.379793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.379802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.380974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.380983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.381258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.381268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.381563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.381573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 
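Context, not from the captured run: errno 111 on Linux is ECONNREFUSED, i.e. the initiator's connect() is being actively refused because nothing is accepting on 10.0.0.2:4420 at this point in the test. A minimal bash sketch of the same failure mode, assuming a local port with no listener (the address and port are illustrative):

  # Probe a port with no listener; bash's /dev/tcp connect fails the same way
  # the initiator's connect() does (Linux reports this as errno 111, ECONNREFUSED).
  if ! (: </dev/tcp/127.0.0.1/4420) 2>/dev/null; then
    echo "connect() refused -> errno 111 (ECONNREFUSED): no listener on the port"
  fi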
00:30:31.963 [2024-12-09 06:29:26.381871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.382176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.382187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.382455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.382466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.382622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.382632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.382966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.382976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.383184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.383193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 [2024-12-09 06:29:26.383375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.383384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.963 [2024-12-09 06:29:26.383660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.383672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 00:30:31.963 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.963 [2024-12-09 06:29:26.383983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.963 [2024-12-09 06:29:26.383994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.963 qpair failed and we were unable to recover it. 
00:30:31.963 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.963 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.963 [the connect() failed / qpair failed group keeps repeating from 06:29:26.384279 through 06:29:26.386319]
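xtrace_disable/set +x mute bash command tracing around noisy helpers so the log stays readable. A save-and-restore sketch of that pattern, assuming nothing about the harness's actual implementation:

  # Illustrative save/restore of 'set -x' state (not SPDK's actual code).
  xtrace_disable() { [[ $- == *x* ]] && _XTRACE_ON=1 || _XTRACE_ON=0; set +x; }
  xtrace_restore() { if (( _XTRACE_ON )); then set -x; fi; }

  set -x
  echo traced
  xtrace_disable
  echo quiet          # not traced
  xtrace_restore
  echo traced again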
00:30:31.964 [the connect() failed / qpair failed group repeats continuously from 06:29:26.386483 through 06:29:26.414090 while the host keeps retrying 10.0.0.2:4420]
00:30:31.966 [2024-12-09 06:29:26.414381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.966 [2024-12-09 06:29:26.414390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.966 qpair failed and we were unable to recover it.
00:30:31.966 Malloc0
00:30:31.966 [five more connect() failure groups between 06:29:26.414732 and 06:29:26.415472]
00:30:31.967 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.967 [three more connect() failure groups between 06:29:26.415666 and 06:29:26.415905]
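The bare "Malloc0" above is the RPC's stdout: bdev_malloc_create echoes the name of the bdev it just created, here a 64 MiB RAM-backed disk with 512-byte blocks. Run by hand it would look roughly like this; the rpc.py path assumes a default SPDK checkout with the target app already running:

  # 64 = size in MiB, 512 = block size in bytes, -b = name to assign the bdev.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # stdout: Malloc0   <- the created bdev's name, as captured in the log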
00:30:31.967 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:31.967 [2024-12-09 06:29:26.416213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.416223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.967 [2024-12-09 06:29:26.416489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.416499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 [2024-12-09 06:29:26.416580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.416589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.416759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.417061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.417071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.417372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.417382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.417662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.417672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.417961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.417971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 
00:30:31.967 [2024-12-09 06:29:26.418149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.418159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.418470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.418479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.418790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.418799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.419075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.419084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.419344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.419352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.419659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.419676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.419979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.419988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.420179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.420188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.420498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.420507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.420783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.420792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 
00:30:31.967 [2024-12-09 06:29:26.420956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.420965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.421272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.421283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.421566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.421575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.421780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.421789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.422092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.422102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.422143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.967 [2024-12-09 06:29:26.422402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.422412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.422684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.422694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.422980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.422989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 00:30:31.967 [2024-12-09 06:29:26.423154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.967 [2024-12-09 06:29:26.423163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 00:30:31.967 qpair failed and we were unable to recover it. 
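The xtrace lines above show the target side of the test creating its TCP transport (rpc_cmd nvmf_create_transport -t tcp -o), which the *** TCP Transport Init *** notice acknowledges. As a minimal sketch, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py and a target app is listening on the default RPC socket, the same step standalone would be:
  scripts/rpc.py nvmf_create_transport -t tcp -o   # create the NVMe-oF TCP transport; flags copied verbatim from the trace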
00:30:31.967 [2024-12-09 06:29:26.423523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.967 [2024-12-09 06:29:26.423533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.967 qpair failed and we were unable to recover it.
00:30:31.967 [2024-12-09 06:29:26.423760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.967 [2024-12-09 06:29:26.423770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.967 qpair failed and we were unable to recover it.
00:30:31.967 [2024-12-09 06:29:26.423978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.967 [2024-12-09 06:29:26.423986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.424280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.424289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.424591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.424600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.424923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.424934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.425106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.425115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.425409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.425418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.425722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.425732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.426045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.426054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.426213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.426222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.426565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.426574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.426782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.426791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.426957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.426967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.427197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.427206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.427515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.427524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.427875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.427885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.428196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.428206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.428543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.428552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.428837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.428847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.429146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.429155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.429466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.429476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.429761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.429770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.429930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.429939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.430100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.430110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.430433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.430442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.430639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.430649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.430980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.430989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.968 [2024-12-09 06:29:26.431292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.431301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.431370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.431378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:31.968 [2024-12-09 06:29:26.431638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.431649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.968 [2024-12-09 06:29:26.431953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.431963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.432128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.432138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.968 [2024-12-09 06:29:26.432451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.432461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.432789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.432799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.433094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.433103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.433400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.433409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
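Next the trace shows the subsystem being created (rpc_cmd nvmf_create_subsystem ...) while the host-side connect() retries keep failing. The equivalent standalone call, under the same scripts/rpc.py assumption:
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number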
00:30:31.968 [2024-12-09 06:29:26.433760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.433769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.968 qpair failed and we were unable to recover it.
00:30:31.968 [2024-12-09 06:29:26.434081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.968 [2024-12-09 06:29:26.434092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.434367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.434376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.434423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.434431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.434765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.434775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.434958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.434967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.435283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.435293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.435563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.435573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.435784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.435793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.435964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.435973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.436204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.436214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.436459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.436469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.436764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.436773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.437100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.437109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.437254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.437264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.437556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.437566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.437874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.437883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.438180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.438189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.438240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.438249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.438572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.438582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.438881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.438890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.439043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.439052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.439351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.439361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.439523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.439532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.439796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.439805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.440031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.440041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.440228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.440237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.440503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.440512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.440714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.440724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.440889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.440899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.441190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.441199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.441384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.441394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.441732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.441741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.441907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.441917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.442216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.442225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.442505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.442514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.442691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.442701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 [2024-12-09 06:29:26.442891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.442900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.969 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.969 [2024-12-09 06:29:26.443233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.969 [2024-12-09 06:29:26.443242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.969 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.443539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.443548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:31.970 [2024-12-09 06:29:26.443835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.443845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.970 [2024-12-09 06:29:26.444043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.444052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.970 [2024-12-09 06:29:26.444261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.444270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.444569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.444579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.444868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.444878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.445170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.445179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.445365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.445374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
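The bare Malloc0 earlier in the trace is most likely the bdev name echoed back by a malloc bdev creation; here that bdev is attached to the subsystem as a namespace. A sketch of the pair of calls; the 64 MiB size and 512-byte block size are assumptions, since the trace only shows the resulting name:
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                       # assumed size/block size; prints the bdev name
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace of cnode1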
00:30:31.970 [2024-12-09 06:29:26.445684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.445694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.445881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.445890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.446059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.446068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.446342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.446352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.446664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.446673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.446969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.446978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.447243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.447430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.447439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.447740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.447750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.447901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.447911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.448087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.448097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.448393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.448404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.448697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.448707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.448865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.448875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.449057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.449067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.449454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.449464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.449519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.449528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.449683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.449692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.449849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.449859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.450079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.450089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.450415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.450424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.450696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.450706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.451006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.451015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.451179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.451188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.451365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.451379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.451679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.451689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.452000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.452010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.452057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.970 [2024-12-09 06:29:26.452065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.970 qpair failed and we were unable to recover it.
00:30:31.970 [2024-12-09 06:29:26.452341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.452351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.452568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.452578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.452781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.452790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.453106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.453116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.453281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.453290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.453471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.453481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.453657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.453667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.453824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.453834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.454150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.454159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.454311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.454320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.454722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.454811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.455214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.455251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.971 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.455508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.455551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a4000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:31.971 [2024-12-09 06:29:26.455860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.455871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.971 [2024-12-09 06:29:26.456050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.456060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.971 [2024-12-09 06:29:26.456390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.456400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.456476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.456485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.456659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.456669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.456985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.456995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
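Only after this nvmf_subsystem_add_listener call can the target accept connections on 10.0.0.2:4420; the repeated errno = 111 failures above are ECONNREFUSED, consistent with no listener yet accepting on that address and port. The standalone listener call under the same scripts/rpc.py assumption, plus a quick way to confirm the errno decoding:
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused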
00:30:31.971 [2024-12-09 06:29:26.457233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.457243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.457584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.457595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.457664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.457673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.457878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.457888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.458201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.458211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.458378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.458388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.458657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.458666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.458839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.458848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.458891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.458900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.459002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.459011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.459213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.459222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.459441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.459453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.459629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.459639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.971 [2024-12-09 06:29:26.459949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.971 [2024-12-09 06:29:26.459958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.971 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.460261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.460270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.460459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.460471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.460754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.460763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.461092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.461101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.461390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.461399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.461702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.461711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.461795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.461804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.462054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.462063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.462380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.972 [2024-12-09 06:29:26.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f71a8000b90 with addr=10.0.0.2, port=4420 [2024-12-09 06:29:26.462387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.972 [2024-12-09 06:29:26.473018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.972 [2024-12-09 06:29:26.473092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.972 [2024-12-09 06:29:26.473109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.972 [2024-12-09 06:29:26.473117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.972 [2024-12-09 06:29:26.473126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:31.972 [2024-12-09 06:29:26.473143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.972 06:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 504507
00:30:31.972 [2024-12-09 06:29:26.482825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.972 [2024-12-09 06:29:26.482875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.972 [2024-12-09 06:29:26.482889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.972 [2024-12-09 06:29:26.482896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.972 [2024-12-09 06:29:26.482902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:31.972 [2024-12-09 06:29:26.482916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.492968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.972 [2024-12-09 06:29:26.493016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.972 [2024-12-09 06:29:26.493029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.972 [2024-12-09 06:29:26.493035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.972 [2024-12-09 06:29:26.493041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:31.972 [2024-12-09 06:29:26.493055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.502919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.972 [2024-12-09 06:29:26.502969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.972 [2024-12-09 06:29:26.502982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.972 [2024-12-09 06:29:26.502989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.972 [2024-12-09 06:29:26.502994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:31.972 [2024-12-09 06:29:26.503007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.972 qpair failed and we were unable to recover it.
00:30:31.972 [2024-12-09 06:29:26.512825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.972 [2024-12-09 06:29:26.512878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.972 [2024-12-09 06:29:26.512891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.972 [2024-12-09 06:29:26.512898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.972 [2024-12-09 06:29:26.512903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:31.972 [2024-12-09 06:29:26.512916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.972 qpair failed and we were unable to recover it.
00:30:32.233 [2024-12-09 06:29:26.522956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.523010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.523022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.523029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.523035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.523048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.532881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.532925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.532938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.532944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.532949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.532963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.542978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.543075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.543087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.543094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.543100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.543113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.553077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.553136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.553149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.553155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.553161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.553174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.563068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.563113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.563126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.563136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.563142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.563155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.573082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.573130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.573142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.573149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.573155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.573168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.582967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.583015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.583028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.583035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.583041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.583054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.593147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.593243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.593256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.593263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.593269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.593283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.603051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.603097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.603110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.603117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.603122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.603139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.613102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.613146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.613159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.613165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.613171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.613184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.623201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.623246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.623259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.234 [2024-12-09 06:29:26.623265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.234 [2024-12-09 06:29:26.623271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.234 [2024-12-09 06:29:26.623284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.234 qpair failed and we were unable to recover it.
00:30:32.234 [2024-12-09 06:29:26.633257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.234 [2024-12-09 06:29:26.633311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.234 [2024-12-09 06:29:26.633323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.633330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.633336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.633349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.643163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.643209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.643221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.643228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.643233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.643246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.653420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.653484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.653497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.653503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.653509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.653522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.663326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.663371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.663384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.663390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.663396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.663410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.673415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.673474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.673487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.673493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.673499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.673512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.683425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.683475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.683487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.683494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.683499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.683512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.693407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.693460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.693472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.693482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.693488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.693501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.703400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.703476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.703490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.703496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.703502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.703516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.713466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.713525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.713537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.713543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.713549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.713562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.723485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.723536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.723548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.723555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.723560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.723573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.733511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.733558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.733570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.733577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.733582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.733598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.743492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.743538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.743550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.743556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.743562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.743575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.753550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.235 [2024-12-09 06:29:26.753598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.235 [2024-12-09 06:29:26.753611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.235 [2024-12-09 06:29:26.753617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.235 [2024-12-09 06:29:26.753623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.235 [2024-12-09 06:29:26.753636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.235 qpair failed and we were unable to recover it.
00:30:32.235 [2024-12-09 06:29:26.763596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.763641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.763654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.763660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.763666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.763679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.236 [2024-12-09 06:29:26.773631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.773681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.773693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.773700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.773705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.773718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.236 [2024-12-09 06:29:26.783621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.783670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.783682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.783689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.783695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.783708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.236 [2024-12-09 06:29:26.793676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.793730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.793742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.793749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.793755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.793768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.236 [2024-12-09 06:29:26.803673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.803719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.803732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.803738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.803744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.803757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.236 [2024-12-09 06:29:26.813732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.236 [2024-12-09 06:29:26.813783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.236 [2024-12-09 06:29:26.813795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.236 [2024-12-09 06:29:26.813801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.236 [2024-12-09 06:29:26.813807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.236 [2024-12-09 06:29:26.813820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.236 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.823731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.823778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.823793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.823800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.823806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.823819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.833806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.833861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.833873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.833880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.833885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.833898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.843812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.843859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.843871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.843877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.843883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.843895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.853882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.853955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.853968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.853974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.853980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.853993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.863819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.863890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.863903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.863909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.863918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.863931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.873901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.873950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.873963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.873969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.873975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.873987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.883900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.883945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.883958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.883964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.883970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.883982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.893828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.893874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.893887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.893893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.893898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.893911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.903927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.903975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.903988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.903995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.904000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.904013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.914035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.914083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.914095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.914101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.914106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.914119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.924010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.497 [2024-12-09 06:29:26.924057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.497 [2024-12-09 06:29:26.924069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.497 [2024-12-09 06:29:26.924076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.497 [2024-12-09 06:29:26.924081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:32.497 [2024-12-09 06:29:26.924094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:32.497 qpair failed and we were unable to recover it.
00:30:32.497 [2024-12-09 06:29:26.934068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.934115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.934127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.934134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.934140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.934152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:26.944078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.944152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.944164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.944170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.944176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.944188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:26.954135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.954182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.954198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.954204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.954210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.954223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 
00:30:32.497 [2024-12-09 06:29:26.964147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.964206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.964228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.964236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.964243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.964260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:26.974184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.974232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.974254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.974262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.974268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.974286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:26.984060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.984106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.984120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.984127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.984133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.984147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 
00:30:32.497 [2024-12-09 06:29:26.994220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:26.994268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:26.994281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:26.994287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:26.994297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:26.994311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:27.004310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:27.004391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:27.004405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.497 [2024-12-09 06:29:27.004411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.497 [2024-12-09 06:29:27.004417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.497 [2024-12-09 06:29:27.004431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.497 qpair failed and we were unable to recover it. 00:30:32.497 [2024-12-09 06:29:27.014288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.497 [2024-12-09 06:29:27.014338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.497 [2024-12-09 06:29:27.014351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.014358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.014363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.014377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 
00:30:32.498 [2024-12-09 06:29:27.024299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.024345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.024358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.024364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.024370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.024382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 00:30:32.498 [2024-12-09 06:29:27.034347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.034399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.034412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.034418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.034424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.034436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 00:30:32.498 [2024-12-09 06:29:27.044355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.044413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.044425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.044431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.044437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.044453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 
00:30:32.498 [2024-12-09 06:29:27.054382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.054431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.054444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.054454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.054460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.054474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 00:30:32.498 [2024-12-09 06:29:27.064405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.064456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.064469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.064475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.064481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.064494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 00:30:32.498 [2024-12-09 06:29:27.074485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.498 [2024-12-09 06:29:27.074538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.498 [2024-12-09 06:29:27.074551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.498 [2024-12-09 06:29:27.074557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.498 [2024-12-09 06:29:27.074563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.498 [2024-12-09 06:29:27.074576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.498 qpair failed and we were unable to recover it. 
00:30:32.759 [2024-12-09 06:29:27.084510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.759 [2024-12-09 06:29:27.084561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.759 [2024-12-09 06:29:27.084573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.759 [2024-12-09 06:29:27.084580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.759 [2024-12-09 06:29:27.084585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.759 [2024-12-09 06:29:27.084598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.759 qpair failed and we were unable to recover it. 00:30:32.759 [2024-12-09 06:29:27.094380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.759 [2024-12-09 06:29:27.094424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.759 [2024-12-09 06:29:27.094437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.759 [2024-12-09 06:29:27.094443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.759 [2024-12-09 06:29:27.094452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.759 [2024-12-09 06:29:27.094466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.759 qpair failed and we were unable to recover it. 00:30:32.759 [2024-12-09 06:29:27.104501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.104562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.104575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.104581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.104587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.104600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 
00:30:32.760 [2024-12-09 06:29:27.114444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.114501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.114513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.114520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.114525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.114538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.124607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.124651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.124663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.124677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.124683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.124696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.134627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.134674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.134687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.134693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.134698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.134711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 
00:30:32.760 [2024-12-09 06:29:27.144623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.144679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.144692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.144698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.144704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.144717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.154674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.154725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.154738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.154744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.154750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.154763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.164714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.164760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.164772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.164779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.164784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.164801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 
00:30:32.760 [2024-12-09 06:29:27.174596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.174642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.174655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.174661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.174667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.174680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.184736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.184784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.184797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.184803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.184809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.184822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.194794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.194856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.194868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.194875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.194880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.194893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 
00:30:32.760 [2024-12-09 06:29:27.204805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.204863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.204876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.204882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.204888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.204900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.214827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.214877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.214890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.214896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.760 [2024-12-09 06:29:27.214902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.760 [2024-12-09 06:29:27.214915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.760 qpair failed and we were unable to recover it. 00:30:32.760 [2024-12-09 06:29:27.224840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.760 [2024-12-09 06:29:27.224885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.760 [2024-12-09 06:29:27.224897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.760 [2024-12-09 06:29:27.224904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.224909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.224922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 
00:30:32.761 [2024-12-09 06:29:27.234908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.234957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.234969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.234976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.234981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.234994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.244909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.244953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.244966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.244972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.244978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.244990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.254849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.254898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.254914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.254921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.254926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.254939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 
00:30:32.761 [2024-12-09 06:29:27.264945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.264997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.265010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.265016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.265022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.265036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.274898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.274988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.275001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.275008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.275014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.275027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.285041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.285086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.285098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.285105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.285111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.285124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 
00:30:32.761 [2024-12-09 06:29:27.295063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.295108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.295120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.295127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.295133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.295149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.305059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.305103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.305116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.305123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.305129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.305142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.315131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.315179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.315191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.315198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.315204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.315217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 
00:30:32.761 [2024-12-09 06:29:27.325142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.325198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.325210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.325216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.325222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.325234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:32.761 [2024-12-09 06:29:27.335089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.761 [2024-12-09 06:29:27.335142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.761 [2024-12-09 06:29:27.335156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.761 [2024-12-09 06:29:27.335162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.761 [2024-12-09 06:29:27.335168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:32.761 [2024-12-09 06:29:27.335181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.761 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.345184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.345234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.345247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.345253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.345259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.345272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 
00:30:33.023 [2024-12-09 06:29:27.355236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.355284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.355297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.355304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.355310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.355323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.365241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.365305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.365328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.365336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.365342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.365360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.375283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.375367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.375381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.375388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.375394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.375408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 
00:30:33.023 [2024-12-09 06:29:27.385170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.385217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.385234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.385241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.385247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.385267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.395342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.395394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.395407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.395413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.395419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.395433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.405372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.405425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.405437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.405444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.405454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.405468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 
00:30:33.023 [2024-12-09 06:29:27.415389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.415475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.415488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.415495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.415501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.415514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.425404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.425463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.425476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.425483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.425492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.425506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.435471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.435523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.435536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.435542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.435548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.435562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 
00:30:33.023 [2024-12-09 06:29:27.445402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.445457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.445470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.445477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.445483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.445496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.455411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.455460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.023 [2024-12-09 06:29:27.455472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.023 [2024-12-09 06:29:27.455479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.023 [2024-12-09 06:29:27.455485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.023 [2024-12-09 06:29:27.455498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.023 qpair failed and we were unable to recover it. 00:30:33.023 [2024-12-09 06:29:27.465525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.023 [2024-12-09 06:29:27.465571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.465583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.465590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.465596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.465609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 
00:30:33.024 [2024-12-09 06:29:27.475581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.475630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.475643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.475649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.475655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.475668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.485587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.485638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.485651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.485657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.485662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.485676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.495595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.495638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.495651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.495657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.495663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.495676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 
00:30:33.024 [2024-12-09 06:29:27.505613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.505658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.505670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.505677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.505682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.505695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.515670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.515715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.515731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.515737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.515743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.515756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.525695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.525742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.525754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.525761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.525766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.525779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 
00:30:33.024 [2024-12-09 06:29:27.535708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.535755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.535768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.535774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.535780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.535793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.545724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.545772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.545784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.545791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.545797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.545809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.555805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.555848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.555861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.555871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.555877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.555890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 
00:30:33.024 [2024-12-09 06:29:27.565813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.565860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.565873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.565879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.565885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.565898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.575849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.575896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.575909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.575915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.575921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.575934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 00:30:33.024 [2024-12-09 06:29:27.585849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.024 [2024-12-09 06:29:27.585902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.024 [2024-12-09 06:29:27.585915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.024 [2024-12-09 06:29:27.585921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.024 [2024-12-09 06:29:27.585927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.024 [2024-12-09 06:29:27.585940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.024 qpair failed and we were unable to recover it. 
00:30:33.024 [2024-12-09 06:29:27.595915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.025 [2024-12-09 06:29:27.595961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.025 [2024-12-09 06:29:27.595973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.025 [2024-12-09 06:29:27.595980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.025 [2024-12-09 06:29:27.595986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.025 [2024-12-09 06:29:27.595999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.025 qpair failed and we were unable to recover it. 00:30:33.025 [2024-12-09 06:29:27.605927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.025 [2024-12-09 06:29:27.605971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.025 [2024-12-09 06:29:27.605984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.025 [2024-12-09 06:29:27.605991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.025 [2024-12-09 06:29:27.605996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.025 [2024-12-09 06:29:27.606009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.025 qpair failed and we were unable to recover it. 00:30:33.287 [2024-12-09 06:29:27.615932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.615999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.616012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.616018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.616024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.616037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 
00:30:33.287 [2024-12-09 06:29:27.625956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.626019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.626031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.626037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.626043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.626055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 00:30:33.287 [2024-12-09 06:29:27.636018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.636073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.636086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.636093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.636099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.636112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 00:30:33.287 [2024-12-09 06:29:27.646089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.646138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.646151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.646157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.646163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.646175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 
00:30:33.287 [2024-12-09 06:29:27.656075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.656123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.656136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.656143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.656148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.656161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 00:30:33.287 [2024-12-09 06:29:27.666055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.287 [2024-12-09 06:29:27.666115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.287 [2024-12-09 06:29:27.666128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.287 [2024-12-09 06:29:27.666134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.287 [2024-12-09 06:29:27.666140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.287 [2024-12-09 06:29:27.666152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.287 qpair failed and we were unable to recover it. 00:30:33.287 [2024-12-09 06:29:27.676031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.676088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.676101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.676107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.676113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.676126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 
00:30:33.288 [2024-12-09 06:29:27.686015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.686061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.686073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.686083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.686088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.686101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.696155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.696199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.696212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.696218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.696224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.696237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.706174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.706237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.706260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.706268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.706274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.706292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 
00:30:33.288 [2024-12-09 06:29:27.716230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.716281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.716304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.716312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.716319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.716336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.726270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.726329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.726370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.726377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.726383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.726410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.736308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.736395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.736408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.736415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.736421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.736435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 
00:30:33.288 [2024-12-09 06:29:27.746257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.746353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.746366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.746373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.746379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.746392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.756343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.756397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.756410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.756417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.756423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.756437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.766351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.766445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.766461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.766468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.766474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.766487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 
00:30:33.288 [2024-12-09 06:29:27.776439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.776535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.776548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.776554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.776560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.776574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.786431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.786488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.786501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.786507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.786513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.786526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 00:30:33.288 [2024-12-09 06:29:27.796457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.288 [2024-12-09 06:29:27.796550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.288 [2024-12-09 06:29:27.796562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.288 [2024-12-09 06:29:27.796569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.288 [2024-12-09 06:29:27.796574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.288 [2024-12-09 06:29:27.796588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.288 qpair failed and we were unable to recover it. 
00:30:33.289 [2024-12-09 06:29:27.806486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.806534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.806547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.806554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.806559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.806572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 00:30:33.289 [2024-12-09 06:29:27.816455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.816500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.816517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.816523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.816529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.816542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 00:30:33.289 [2024-12-09 06:29:27.826463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.826523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.826535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.826542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.826548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.826561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 
00:30:33.289 [2024-12-09 06:29:27.836553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.836607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.836620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.836626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.836632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.836644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 00:30:33.289 [2024-12-09 06:29:27.846446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.846499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.846511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.846517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.846524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.846542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 00:30:33.289 [2024-12-09 06:29:27.856586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.856645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.856657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.856664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.856669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.856686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 
00:30:33.289 [2024-12-09 06:29:27.866509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.289 [2024-12-09 06:29:27.866561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.289 [2024-12-09 06:29:27.866573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.289 [2024-12-09 06:29:27.866580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.289 [2024-12-09 06:29:27.866585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.289 [2024-12-09 06:29:27.866598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.289 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.876682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.876728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.876741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.876748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.876753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.876766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.886702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.886754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.886766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.886773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.886778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.886791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 
00:30:33.551 [2024-12-09 06:29:27.896721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.896793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.896806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.896813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.896818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.896832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.906723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.906772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.906785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.906791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.906797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.906809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.916662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.916716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.916728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.916735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.916741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.916753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 
00:30:33.551 [2024-12-09 06:29:27.926796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.926886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.926898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.926905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.926911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.926923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.936813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.936869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.936881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.936888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.936893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.936906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 00:30:33.551 [2024-12-09 06:29:27.946820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.946880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.946899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.551 [2024-12-09 06:29:27.946905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.551 [2024-12-09 06:29:27.946911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.551 [2024-12-09 06:29:27.946924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.551 qpair failed and we were unable to recover it. 
00:30:33.551 [2024-12-09 06:29:27.956857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.551 [2024-12-09 06:29:27.956906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.551 [2024-12-09 06:29:27.956919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:27.956925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:27.956931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:27.956944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 00:30:33.552 [2024-12-09 06:29:27.966892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.552 [2024-12-09 06:29:27.966943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.552 [2024-12-09 06:29:27.966955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:27.966962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:27.966967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:27.966980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 00:30:33.552 [2024-12-09 06:29:27.976881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.552 [2024-12-09 06:29:27.976933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.552 [2024-12-09 06:29:27.976945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:27.976952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:27.976957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:27.976970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 
00:30:33.552 [2024-12-09 06:29:27.986972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.552 [2024-12-09 06:29:27.987026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.552 [2024-12-09 06:29:27.987038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:27.987045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:27.987054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:27.987066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 00:30:33.552 [2024-12-09 06:29:27.997005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.552 [2024-12-09 06:29:27.997117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.552 [2024-12-09 06:29:27.997129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:27.997136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:27.997142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:27.997154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 00:30:33.552 [2024-12-09 06:29:28.006981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.552 [2024-12-09 06:29:28.007024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.552 [2024-12-09 06:29:28.007037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.552 [2024-12-09 06:29:28.007043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.552 [2024-12-09 06:29:28.007049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:33.552 [2024-12-09 06:29:28.007062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:33.552 qpair failed and we were unable to recover it. 
00:30:33.552 [2024-12-09 06:29:28.017008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.017072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.017085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.017091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.017097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.017109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.027037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.027085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.027097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.027103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.027109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.027122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.037100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.037145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.037158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.037164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.037170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.037183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.047103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.047149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.047164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.047171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.047177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.047191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.057118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.057170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.057183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.057189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.057195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.057208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.067112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.067159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.067171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.067177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.067183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.067197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.077200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.077256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.077272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.552 [2024-12-09 06:29:28.077278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.552 [2024-12-09 06:29:28.077284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.552 [2024-12-09 06:29:28.077297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.552 qpair failed and we were unable to recover it.
00:30:33.552 [2024-12-09 06:29:28.087224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.552 [2024-12-09 06:29:28.087268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.552 [2024-12-09 06:29:28.087280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.553 [2024-12-09 06:29:28.087287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.553 [2024-12-09 06:29:28.087292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.553 [2024-12-09 06:29:28.087305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.553 qpair failed and we were unable to recover it.
00:30:33.553 [2024-12-09 06:29:28.097239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.553 [2024-12-09 06:29:28.097292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.553 [2024-12-09 06:29:28.097305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.553 [2024-12-09 06:29:28.097311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.553 [2024-12-09 06:29:28.097317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.553 [2024-12-09 06:29:28.097330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.553 qpair failed and we were unable to recover it.
00:30:33.553 [2024-12-09 06:29:28.107244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.553 [2024-12-09 06:29:28.107301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.553 [2024-12-09 06:29:28.107313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.553 [2024-12-09 06:29:28.107320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.553 [2024-12-09 06:29:28.107326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.553 [2024-12-09 06:29:28.107338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.553 qpair failed and we were unable to recover it.
00:30:33.553 [2024-12-09 06:29:28.117189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.553 [2024-12-09 06:29:28.117234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.553 [2024-12-09 06:29:28.117248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.553 [2024-12-09 06:29:28.117258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.553 [2024-12-09 06:29:28.117264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.553 [2024-12-09 06:29:28.117283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.553 qpair failed and we were unable to recover it.
00:30:33.553 [2024-12-09 06:29:28.127323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.553 [2024-12-09 06:29:28.127373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.553 [2024-12-09 06:29:28.127385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.553 [2024-12-09 06:29:28.127392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.553 [2024-12-09 06:29:28.127397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.553 [2024-12-09 06:29:28.127411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.553 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.137387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.137434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.137446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.137457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.137463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.137476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.147243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.147324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.147336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.147343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.147349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.147361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.157421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.157474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.157487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.157494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.157500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.157513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.167429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.167480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.167493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.167499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.167505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.167518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.177361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.177422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.177435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.177441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.177447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.177464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.187365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.187411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.187424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.187430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.187436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.187458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.197549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.197603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.197615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.197622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.197628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.197641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.207539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.207596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.207609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.207615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.207621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.207633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.217527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.217574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.217586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.217592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.217598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.217611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.227571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.227646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.227659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.227665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.227671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.227685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.237521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.237571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.237583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.237589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.237595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.237608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.247534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.247587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.247599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.247608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.247615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.247628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.257666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.257711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.257723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.257729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.257735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.257748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.267680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.267725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.267738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.267744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.267750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.267763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.277805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.277879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.813 [2024-12-09 06:29:28.277891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.813 [2024-12-09 06:29:28.277898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.813 [2024-12-09 06:29:28.277904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.813 [2024-12-09 06:29:28.277916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.813 qpair failed and we were unable to recover it.
00:30:33.813 [2024-12-09 06:29:28.287766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.813 [2024-12-09 06:29:28.287815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.287828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.287834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.287839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.287855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.297676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.297725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.297739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.297745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.297751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.297764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.307789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.307861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.307874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.307880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.307886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.307899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.317865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.317928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.317940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.317947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.317953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.317965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.327862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.327930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.327943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.327950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.327955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.327970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.337881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.337927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.337939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.337946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.337951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.337965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.347920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.347966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.347979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.347986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.347991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.348004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.357966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.358018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.358031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.358037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.358043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.358056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.367870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.367923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.367935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.367942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.367947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.367960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.377928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.377976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.377992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.377998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.378004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.378017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:33.814 [2024-12-09 06:29:28.387894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.814 [2024-12-09 06:29:28.387939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.814 [2024-12-09 06:29:28.387951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.814 [2024-12-09 06:29:28.387958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.814 [2024-12-09 06:29:28.387964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:33.814 [2024-12-09 06:29:28.387976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:33.814 qpair failed and we were unable to recover it.
00:30:34.074 [2024-12-09 06:29:28.398073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.074 [2024-12-09 06:29:28.398127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.074 [2024-12-09 06:29:28.398140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.074 [2024-12-09 06:29:28.398147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.074 [2024-12-09 06:29:28.398153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.074 [2024-12-09 06:29:28.398166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.074 qpair failed and we were unable to recover it.
00:30:34.074 [2024-12-09 06:29:28.408087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.074 [2024-12-09 06:29:28.408143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.074 [2024-12-09 06:29:28.408156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.074 [2024-12-09 06:29:28.408162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.074 [2024-12-09 06:29:28.408168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.074 [2024-12-09 06:29:28.408181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.074 qpair failed and we were unable to recover it.
00:30:34.074 [2024-12-09 06:29:28.418098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.074 [2024-12-09 06:29:28.418143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.074 [2024-12-09 06:29:28.418155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.074 [2024-12-09 06:29:28.418162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.074 [2024-12-09 06:29:28.418170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.418183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.428129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.428172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.428185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.428191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.428197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.428209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.438190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.438242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.438255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.438261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.438266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.438279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.448195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.448242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.448255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.448261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.448267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.448280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.458244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.458293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.458305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.458312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.458318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.458330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.468201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.468274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.468286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.468292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.468298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.468311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.478286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.478344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.478357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.478363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.478369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.478382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.488320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.488368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.488380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.488387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.488392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.488405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.498336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.498383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.498396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.498403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.498408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.498421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.508333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.508378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.508394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.508400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.508406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.508419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.518395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.518452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.518465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.518471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.518477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.518490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.528406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.528456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.528468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.528474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.528481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.528494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.538436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.538488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.538500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.538507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.538512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.075 [2024-12-09 06:29:28.538525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.075 qpair failed and we were unable to recover it.
00:30:34.075 [2024-12-09 06:29:28.548431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.075 [2024-12-09 06:29:28.548479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.075 [2024-12-09 06:29:28.548492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.075 [2024-12-09 06:29:28.548498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.075 [2024-12-09 06:29:28.548507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.548520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.558394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.558446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.558463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.558470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.558475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.558489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.568515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.568564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.568577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.568584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.568590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.568603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.578516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.578569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.578582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.578588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.578594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.578607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.588576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.588625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.588638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.588645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.588651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.588666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.598618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.598664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.598677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.598683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.598689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.598702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.608619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.608665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.608677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.608683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.608689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.608702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.618677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.618718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.618730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.618737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.618743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.618755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.628677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.628725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.628737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.628744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.628750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.628762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.638720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.638773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.638792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.638798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.638804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.638817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.648732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.076 [2024-12-09 06:29:28.648778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.076 [2024-12-09 06:29:28.648790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.076 [2024-12-09 06:29:28.648797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.076 [2024-12-09 06:29:28.648803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.076 [2024-12-09 06:29:28.648816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.076 qpair failed and we were unable to recover it.
00:30:34.076 [2024-12-09 06:29:28.658739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-12-09 06:29:28.658794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-12-09 06:29:28.658808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-12-09 06:29:28.658814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-12-09 06:29:28.658823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.337 [2024-12-09 06:29:28.658838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-12-09 06:29:28.668805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-12-09 06:29:28.668853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-12-09 06:29:28.668866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-12-09 06:29:28.668872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-12-09 06:29:28.668878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.337 [2024-12-09 06:29:28.668892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-12-09 06:29:28.678879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-12-09 06:29:28.678931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-12-09 06:29:28.678943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-12-09 06:29:28.678954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-12-09 06:29:28.678959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.337 [2024-12-09 06:29:28.678972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-12-09 06:29:28.688884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-12-09 06:29:28.688931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-12-09 06:29:28.688943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-12-09 06:29:28.688949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-12-09 06:29:28.688955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.337 [2024-12-09 06:29:28.688968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-12-09 06:29:28.698869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-12-09 06:29:28.698920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-12-09 06:29:28.698932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-12-09 06:29:28.698939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-12-09 06:29:28.698945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:34.337 [2024-12-09 06:29:28.698958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-12-09 06:29:28.708877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.337 [2024-12-09 06:29:28.708937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.337 [2024-12-09 06:29:28.708950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.337 [2024-12-09 06:29:28.708956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.337 [2024-12-09 06:29:28.708962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.337 [2024-12-09 06:29:28.708975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.337 qpair failed and we were unable to recover it. 00:30:34.337 [2024-12-09 06:29:28.718926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.337 [2024-12-09 06:29:28.718981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.337 [2024-12-09 06:29:28.718993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.337 [2024-12-09 06:29:28.718999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.337 [2024-12-09 06:29:28.719005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.337 [2024-12-09 06:29:28.719018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.337 qpair failed and we were unable to recover it. 00:30:34.337 [2024-12-09 06:29:28.728955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.337 [2024-12-09 06:29:28.729008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.337 [2024-12-09 06:29:28.729020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.337 [2024-12-09 06:29:28.729027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.337 [2024-12-09 06:29:28.729032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.337 [2024-12-09 06:29:28.729045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.337 qpair failed and we were unable to recover it. 
00:30:34.337 [2024-12-09 06:29:28.738982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.337 [2024-12-09 06:29:28.739032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.337 [2024-12-09 06:29:28.739044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.337 [2024-12-09 06:29:28.739050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.337 [2024-12-09 06:29:28.739056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.337 [2024-12-09 06:29:28.739069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.337 qpair failed and we were unable to recover it. 00:30:34.337 [2024-12-09 06:29:28.748994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.337 [2024-12-09 06:29:28.749039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.337 [2024-12-09 06:29:28.749052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.337 [2024-12-09 06:29:28.749058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.337 [2024-12-09 06:29:28.749064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.337 [2024-12-09 06:29:28.749077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.759057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.759108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.759120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.759127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.759132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.759145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 
00:30:34.338 [2024-12-09 06:29:28.769080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.769133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.769146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.769153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.769158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.769171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.779063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.779122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.779135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.779141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.779147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.779159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.789101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.789148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.789160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.789166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.789172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.789185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 
00:30:34.338 [2024-12-09 06:29:28.799123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.799184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.799207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.799215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.799221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.799239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.809111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.809161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.809175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.809191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.809197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.809211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.819154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.819201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.819223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.819231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.819237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.819255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 
00:30:34.338 [2024-12-09 06:29:28.829099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.829145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.829160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.829166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.829172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.829186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.839157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.839223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.839235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.839242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.839248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.839260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.849293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.849350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.849372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.849380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.849387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.849409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 
00:30:34.338 [2024-12-09 06:29:28.859296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.859336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.859350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.859357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.859363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.859377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.869329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.869376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.869389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.869395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.869401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.869415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.338 qpair failed and we were unable to recover it. 00:30:34.338 [2024-12-09 06:29:28.879367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.338 [2024-12-09 06:29:28.879432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.338 [2024-12-09 06:29:28.879444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.338 [2024-12-09 06:29:28.879454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.338 [2024-12-09 06:29:28.879460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.338 [2024-12-09 06:29:28.879473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.339 qpair failed and we were unable to recover it. 
00:30:34.339 [2024-12-09 06:29:28.889413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.339 [2024-12-09 06:29:28.889463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.339 [2024-12-09 06:29:28.889476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.339 [2024-12-09 06:29:28.889482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.339 [2024-12-09 06:29:28.889488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.339 [2024-12-09 06:29:28.889501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.339 qpair failed and we were unable to recover it. 00:30:34.339 [2024-12-09 06:29:28.899419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.339 [2024-12-09 06:29:28.899467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.339 [2024-12-09 06:29:28.899480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.339 [2024-12-09 06:29:28.899486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.339 [2024-12-09 06:29:28.899492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.339 [2024-12-09 06:29:28.899505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.339 qpair failed and we were unable to recover it. 00:30:34.339 [2024-12-09 06:29:28.909445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.339 [2024-12-09 06:29:28.909497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.339 [2024-12-09 06:29:28.909509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.339 [2024-12-09 06:29:28.909516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.339 [2024-12-09 06:29:28.909522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.339 [2024-12-09 06:29:28.909535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.339 qpair failed and we were unable to recover it. 
00:30:34.339 [2024-12-09 06:29:28.919516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.339 [2024-12-09 06:29:28.919563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.339 [2024-12-09 06:29:28.919575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.339 [2024-12-09 06:29:28.919582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.339 [2024-12-09 06:29:28.919587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.339 [2024-12-09 06:29:28.919600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.339 qpair failed and we were unable to recover it. 00:30:34.609 [2024-12-09 06:29:28.929541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.609 [2024-12-09 06:29:28.929589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.609 [2024-12-09 06:29:28.929602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.609 [2024-12-09 06:29:28.929609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.609 [2024-12-09 06:29:28.929615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.609 [2024-12-09 06:29:28.929628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.609 qpair failed and we were unable to recover it. 00:30:34.609 [2024-12-09 06:29:28.939493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.609 [2024-12-09 06:29:28.939538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.609 [2024-12-09 06:29:28.939553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.609 [2024-12-09 06:29:28.939560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.609 [2024-12-09 06:29:28.939566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.609 [2024-12-09 06:29:28.939579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.609 qpair failed and we were unable to recover it. 
00:30:34.609 [2024-12-09 06:29:28.949532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.609 [2024-12-09 06:29:28.949575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.609 [2024-12-09 06:29:28.949588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.609 [2024-12-09 06:29:28.949594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.609 [2024-12-09 06:29:28.949600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.609 [2024-12-09 06:29:28.949613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.609 qpair failed and we were unable to recover it. 00:30:34.609 [2024-12-09 06:29:28.959498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.609 [2024-12-09 06:29:28.959552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.609 [2024-12-09 06:29:28.959565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.609 [2024-12-09 06:29:28.959571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.609 [2024-12-09 06:29:28.959577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.609 [2024-12-09 06:29:28.959590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.609 qpair failed and we were unable to recover it. 00:30:34.609 [2024-12-09 06:29:28.969605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.609 [2024-12-09 06:29:28.969651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.609 [2024-12-09 06:29:28.969664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.609 [2024-12-09 06:29:28.969671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.609 [2024-12-09 06:29:28.969677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:28.969690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 
00:30:34.610 [2024-12-09 06:29:28.979629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:28.979676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:28.979688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:28.979695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:28.979704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:28.979717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:28.989664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:28.989721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:28.989734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:28.989740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:28.989746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:28.989759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:28.999720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:28.999806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:28.999819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:28.999826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:28.999831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:28.999845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 
00:30:34.610 [2024-12-09 06:29:29.009732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.009824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.009836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.009843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.009848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.009861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.019775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.019817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.019829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.019835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.019841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.019854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.029749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.029796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.029808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.029815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.029820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.029834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 
00:30:34.610 [2024-12-09 06:29:29.039831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.039879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.039891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.039898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.039904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.039916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.049854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.049900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.049912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.049919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.049924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.049937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.059847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.059910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.059922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.059928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.059934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.059946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 
00:30:34.610 [2024-12-09 06:29:29.069879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.069930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.069945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.069951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.069957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.069970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.079941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.079996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.080008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.080014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.080020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.080033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 00:30:34.610 [2024-12-09 06:29:29.089961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.090010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.090022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.090028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.090034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.090047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.610 qpair failed and we were unable to recover it. 
00:30:34.610 [2024-12-09 06:29:29.099921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.610 [2024-12-09 06:29:29.099958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.610 [2024-12-09 06:29:29.099970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.610 [2024-12-09 06:29:29.099977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.610 [2024-12-09 06:29:29.099982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.610 [2024-12-09 06:29:29.099995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.109953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.109998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.110011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.110017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.110026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.110039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.120017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.120064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.120076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.120083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.120088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.120101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 
00:30:34.611 [2024-12-09 06:29:29.130068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.130128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.130140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.130147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.130152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.130165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.140067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.140139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.140151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.140158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.140163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.140176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.149980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.150026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.150039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.150046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.150052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.150065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 
00:30:34.611 [2024-12-09 06:29:29.160158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.160209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.160222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.160229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.160234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.160247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.170176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.170218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.170230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.170237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.170242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.170255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.611 [2024-12-09 06:29:29.180158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.180202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.180214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.180220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.180226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.180239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 
00:30:34.611 [2024-12-09 06:29:29.190208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.611 [2024-12-09 06:29:29.190263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.611 [2024-12-09 06:29:29.190275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.611 [2024-12-09 06:29:29.190282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.611 [2024-12-09 06:29:29.190288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.611 [2024-12-09 06:29:29.190300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.611 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.200253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.200305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.200320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.200327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.200333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.200346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.210279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.210367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.210380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.210386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.210392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.210404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 
00:30:34.872 [2024-12-09 06:29:29.220159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.220226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.220240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.220246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.220252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.220267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.230229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.230276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.230289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.230295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.230301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.230314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.240344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.240390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.240402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.240411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.240417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.240430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 
00:30:34.872 [2024-12-09 06:29:29.250389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.250451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.250464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.250471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.250477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.250490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.260256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.260295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.260308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.260314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.260320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.260333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.270419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.270466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.270479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.270486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.270492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.270505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 
00:30:34.872 [2024-12-09 06:29:29.280482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.280567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.280580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.280586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.280592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.280608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.290530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.290578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.290591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.290597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.290603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.290616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 00:30:34.872 [2024-12-09 06:29:29.300502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.300549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.872 [2024-12-09 06:29:29.300562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.872 [2024-12-09 06:29:29.300568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.872 [2024-12-09 06:29:29.300574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.872 [2024-12-09 06:29:29.300587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.872 qpair failed and we were unable to recover it. 
00:30:34.872 [2024-12-09 06:29:29.310517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.872 [2024-12-09 06:29:29.310570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.310583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.310590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.310596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.310609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.320588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.320641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.320655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.320661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.320667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.320680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.330589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.330635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.330648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.330654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.330660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.330673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 
00:30:34.873 [2024-12-09 06:29:29.340645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.340717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.340729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.340736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.340742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.340754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.350568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.350613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.350626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.350632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.350638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.350651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.360700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.360762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.360775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.360781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.360787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.360800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 
00:30:34.873 [2024-12-09 06:29:29.370727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.370778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.370790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.370803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.370809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.370822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.380721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.380809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.380821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.380827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.380833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.380846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.390750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.390803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.390816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.390822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.390828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.390840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 
00:30:34.873 [2024-12-09 06:29:29.400810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.400857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.400870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.400876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.400882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.400895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.410856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.410901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.410913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.410919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.410925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.410941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.420825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.420869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.420881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.420887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.420893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.420905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 
00:30:34.873 [2024-12-09 06:29:29.430855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.430927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.430939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.873 [2024-12-09 06:29:29.430945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.873 [2024-12-09 06:29:29.430951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.873 [2024-12-09 06:29:29.430963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.873 qpair failed and we were unable to recover it. 00:30:34.873 [2024-12-09 06:29:29.440918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.873 [2024-12-09 06:29:29.440967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.873 [2024-12-09 06:29:29.440977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.874 [2024-12-09 06:29:29.440982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.874 [2024-12-09 06:29:29.440986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.874 [2024-12-09 06:29:29.440996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.874 qpair failed and we were unable to recover it. 00:30:34.874 [2024-12-09 06:29:29.450820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.874 [2024-12-09 06:29:29.450864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.874 [2024-12-09 06:29:29.450874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.874 [2024-12-09 06:29:29.450879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.874 [2024-12-09 06:29:29.450883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:34.874 [2024-12-09 06:29:29.450894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.874 qpair failed and we were unable to recover it. 
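The run above repeats a single failure signature: on the target, _nvmf_ctrlr_add_io_qpair() rejects each I/O-queue CONNECT because controller ID 0x1 is no longer known, and the host observes the completion as "sct 1, sc 130". Read against SPDK's public spec headers, that pair is SPDK_NVME_SCT_COMMAND_SPECIFIC with fabrics status 0x82, i.e. Connect Invalid Parameters. A minimal decoding sketch follows; it assumes only the spdk/nvme_spec.h and spdk/nvmf_spec.h definitions, and the helper name decode_connect_status is hypothetical, not something from this test.

#include <stdio.h>

#include "spdk/nvme_spec.h"   /* struct spdk_nvme_cpl, SPDK_NVME_SCT_* */
#include "spdk/nvmf_spec.h"   /* SPDK_NVMF_FABRIC_SC_* */

/* Hypothetical helper: map a completion the log prints as "sct 1, sc 130"
 * to the NVMe-oF CONNECT status it encodes. */
static void
decode_connect_status(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	    cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM) {
		/* 0x82: the target rejected a field of the CONNECT data --
		 * here the stale controller ID 0x1. */
		printf("CONNECT rejected: invalid parameters (stale cntlid)\n");
	} else {
		printf("CONNECT failed: sct %u, sc %u\n",
		       (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
	}
}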
00:30:35.135 [2024-12-09 06:29:29.460935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.460973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.460983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.460988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.460992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.461002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.470978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.471021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.471032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.471037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.471041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.471051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.481048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.481095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.481105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.481110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.481115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.481125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 
00:30:35.135 [2024-12-09 06:29:29.491067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.491127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.491137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.491142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.491147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.491157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.501051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.501091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.501104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.501110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.501114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.501124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.511091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.511147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.511158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.511163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.511168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.511178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 
00:30:35.135 [2024-12-09 06:29:29.521155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.521201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.521212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.521217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.521221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.521231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.531209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.531287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.531297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.531302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.135 [2024-12-09 06:29:29.531306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.135 [2024-12-09 06:29:29.531316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.135 qpair failed and we were unable to recover it. 00:30:35.135 [2024-12-09 06:29:29.541174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.135 [2024-12-09 06:29:29.541214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.135 [2024-12-09 06:29:29.541224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.135 [2024-12-09 06:29:29.541229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.541236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.541246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 
00:30:35.136 [2024-12-09 06:29:29.551178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.551222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.551233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.551238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.551243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.551252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.561143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.561185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.561195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.561200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.561204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.561214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.571183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.571238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.571248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.571252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.571257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.571267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 
00:30:35.136 [2024-12-09 06:29:29.581274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.581323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.581341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.581348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.581353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.581367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.591309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.591351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.591363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.591368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.591373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.591384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.601282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.601332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.601343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.601348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.601353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.601364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 
00:30:35.136 [2024-12-09 06:29:29.611360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.611405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.611415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.611420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.611425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.611435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.621380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.621429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.621439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.621444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.621453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.621463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.631392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.631434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.631447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.631455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.631460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.631470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 
00:30:35.136 [2024-12-09 06:29:29.641502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.136 [2024-12-09 06:29:29.641594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.136 [2024-12-09 06:29:29.641604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.136 [2024-12-09 06:29:29.641609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.136 [2024-12-09 06:29:29.641614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.136 [2024-12-09 06:29:29.641624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.136 qpair failed and we were unable to recover it. 00:30:35.136 [2024-12-09 06:29:29.651481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.651522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.651533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.651538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.651542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.651553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 00:30:35.137 [2024-12-09 06:29:29.661463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.661502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.661513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.661518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.661522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.661532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 
00:30:35.137 [2024-12-09 06:29:29.671411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.671456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.671467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.671472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.671480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.671490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 00:30:35.137 [2024-12-09 06:29:29.681620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.681662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.681673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.681678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.681682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.681692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 00:30:35.137 [2024-12-09 06:29:29.691472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.691545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.691555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.691560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.691564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.691574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 
00:30:35.137 [2024-12-09 06:29:29.701494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.701544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.701554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.701559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.701564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.701574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 00:30:35.137 [2024-12-09 06:29:29.711619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.137 [2024-12-09 06:29:29.711725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.137 [2024-12-09 06:29:29.711735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.137 [2024-12-09 06:29:29.711740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.137 [2024-12-09 06:29:29.711744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.137 [2024-12-09 06:29:29.711755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.137 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.721693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.721736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.721746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.721750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.721755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.721765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 
00:30:35.398 [2024-12-09 06:29:29.731608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.731651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.731663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.731668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.731672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.731682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.741604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.741643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.741654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.741659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.741663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.741674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.751733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.751785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.751795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.751800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.751805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.751815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 
00:30:35.398 [2024-12-09 06:29:29.761799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.761846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.761858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.761863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.761868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.761878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.771692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.771740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.771750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.771755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.771759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.771770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.781779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.781821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.781839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.781844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.781849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.781863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 
00:30:35.398 [2024-12-09 06:29:29.791853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.791894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.791904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.791909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.398 [2024-12-09 06:29:29.791914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.398 [2024-12-09 06:29:29.791924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.398 qpair failed and we were unable to recover it. 00:30:35.398 [2024-12-09 06:29:29.801882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.398 [2024-12-09 06:29:29.801939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.398 [2024-12-09 06:29:29.801950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.398 [2024-12-09 06:29:29.801957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.801962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.801972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.811915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.811958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.811968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.811973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.811978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.811988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 
00:30:35.399 [2024-12-09 06:29:29.821919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.821957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.821967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.821972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.821976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.821987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.831961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.832004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.832014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.832019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.832024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.832034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.842006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.842057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.842067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.842072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.842076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.842089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 
00:30:35.399 [2024-12-09 06:29:29.852022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.852075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.852085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.852090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.852094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.852104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.862028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.862070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.862081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.862086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.862090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.862100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.872037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.872079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.872089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.872094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.872098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.872108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 
00:30:35.399 [2024-12-09 06:29:29.882034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.882080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.882090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.882095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.882099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.882110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.891995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.892082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.892092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.892098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.892102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.892112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.902105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.902142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.902152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.902157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.902161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.902172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 
00:30:35.399 [2024-12-09 06:29:29.912149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.912207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.912217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.912222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.912226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.912236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.922118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.922168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.399 [2024-12-09 06:29:29.922178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.399 [2024-12-09 06:29:29.922183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.399 [2024-12-09 06:29:29.922187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.399 [2024-12-09 06:29:29.922198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-09 06:29:29.932239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.399 [2024-12-09 06:29:29.932280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.932290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.932298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.932302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.932313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 
00:30:35.400 [2024-12-09 06:29:29.942224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.400 [2024-12-09 06:29:29.942280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.942290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.942296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.942300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.942310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 00:30:35.400 [2024-12-09 06:29:29.952141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.400 [2024-12-09 06:29:29.952184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.952195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.952200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.952204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.952215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 00:30:35.400 [2024-12-09 06:29:29.962312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.400 [2024-12-09 06:29:29.962354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.962364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.962370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.962374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.962384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 
00:30:35.400 [2024-12-09 06:29:29.972234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.400 [2024-12-09 06:29:29.972284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.972294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.972299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.972304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.972317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 00:30:35.400 [2024-12-09 06:29:29.982244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.400 [2024-12-09 06:29:29.982281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.400 [2024-12-09 06:29:29.982290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.400 [2024-12-09 06:29:29.982295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.400 [2024-12-09 06:29:29.982299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.400 [2024-12-09 06:29:29.982310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.400 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:29.992380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:29.992422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:29.992432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:29.992437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:29.992442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:29.992455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 
00:30:35.661 [2024-12-09 06:29:30.002775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.002822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.002833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.002839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.002843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.002853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.012677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.012723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.012733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.012738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.012743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.012753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.022795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.022836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.022846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.022852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.022857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.022867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 
00:30:35.661 [2024-12-09 06:29:30.032828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.032872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.032883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.032888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.032893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.032902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.042872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.042912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.042922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.042927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.042932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.042941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.052879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.052919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.052930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.052935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.052939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.052950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 
00:30:35.661 [2024-12-09 06:29:30.062792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.062834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.062849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.062854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.062859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.062869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.072947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.072989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.073000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.661 [2024-12-09 06:29:30.073005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.661 [2024-12-09 06:29:30.073010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.661 [2024-12-09 06:29:30.073020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.661 qpair failed and we were unable to recover it. 00:30:35.661 [2024-12-09 06:29:30.082993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.661 [2024-12-09 06:29:30.083037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.661 [2024-12-09 06:29:30.083047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.083052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.083057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.083067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 
00:30:35.662 [2024-12-09 06:29:30.092878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.092917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.092927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.092933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.092937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.092948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.103035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.103073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.103083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.103089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.103096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.103106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.113035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.113085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.113096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.113101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.113106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.113116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 
00:30:35.662 [2024-12-09 06:29:30.123100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.123148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.123158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.123163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.123167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.123178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.133117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.133155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.133165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.133171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.133175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.133186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.143162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.143206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.143216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.143221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.143226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.143236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 
00:30:35.662 [2024-12-09 06:29:30.153167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.153211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.153222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.153227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.153231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.153241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.163189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.163233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.163242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.163247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.163252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.163262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.173216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.173258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.173268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.173273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.173278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.173289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 
00:30:35.662 [2024-12-09 06:29:30.183267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.183310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.183320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.183325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.183329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.183339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.193284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.193328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.193340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.193345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.193349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.193360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 00:30:35.662 [2024-12-09 06:29:30.203329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.203370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.203380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.662 [2024-12-09 06:29:30.203385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.662 [2024-12-09 06:29:30.203389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.662 [2024-12-09 06:29:30.203399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.662 qpair failed and we were unable to recover it. 
00:30:35.662 [2024-12-09 06:29:30.213305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.662 [2024-12-09 06:29:30.213347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.662 [2024-12-09 06:29:30.213357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.663 [2024-12-09 06:29:30.213363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.663 [2024-12-09 06:29:30.213367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.663 [2024-12-09 06:29:30.213377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.663 qpair failed and we were unable to recover it. 00:30:35.663 [2024-12-09 06:29:30.223407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.663 [2024-12-09 06:29:30.223484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.663 [2024-12-09 06:29:30.223494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.663 [2024-12-09 06:29:30.223500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.663 [2024-12-09 06:29:30.223504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.663 [2024-12-09 06:29:30.223515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.663 qpair failed and we were unable to recover it. 00:30:35.663 [2024-12-09 06:29:30.233392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.663 [2024-12-09 06:29:30.233450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.663 [2024-12-09 06:29:30.233460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.663 [2024-12-09 06:29:30.233465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.663 [2024-12-09 06:29:30.233472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.663 [2024-12-09 06:29:30.233483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.663 qpair failed and we were unable to recover it. 
00:30:35.663 [2024-12-09 06:29:30.243430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.663 [2024-12-09 06:29:30.243474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.663 [2024-12-09 06:29:30.243484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.663 [2024-12-09 06:29:30.243489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.663 [2024-12-09 06:29:30.243493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.663 [2024-12-09 06:29:30.243504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.663 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.253477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.253524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.253534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.253539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.253544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.253554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.263472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.263509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.263519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.263524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.263529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.263539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 
00:30:35.924 [2024-12-09 06:29:30.273512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.273556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.273566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.273571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.273576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.273586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.283534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.283589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.283600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.283605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.283609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.283620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.293425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.293464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.293474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.293479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.293483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.293494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 
00:30:35.924 [2024-12-09 06:29:30.303455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.303497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.303508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.303513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.303517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.303528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.313660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.924 [2024-12-09 06:29:30.313745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.924 [2024-12-09 06:29:30.313755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.924 [2024-12-09 06:29:30.313760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.924 [2024-12-09 06:29:30.313765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.924 [2024-12-09 06:29:30.313775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.924 qpair failed and we were unable to recover it. 00:30:35.924 [2024-12-09 06:29:30.323649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.323722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.323734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.323739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.323743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.323753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 
00:30:35.925 [2024-12-09 06:29:30.333675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.333719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.333729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.333734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.333739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.333749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.343735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.343773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.343783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.343788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.343792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.343802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.353741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.353782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.353792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.353798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.353802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.353812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 
00:30:35.925 [2024-12-09 06:29:30.363740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.363783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.363793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.363800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.363805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.363815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.373766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.373821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.373831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.373836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.373841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.373851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.383795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.383833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.383843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.383848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.383852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.383862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 
00:30:35.925 [2024-12-09 06:29:30.393833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.393873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.393883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.393888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.393892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.393902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.403742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.403783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.403793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.403798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.403803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.403816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.413909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.413955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.413965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.413970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.413975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.413985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 
00:30:35.925 [2024-12-09 06:29:30.423916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.423966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.423975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.423980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.423984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.423994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.433945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.434001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.434010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.434015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.434019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.434029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 00:30:35.925 [2024-12-09 06:29:30.443982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.925 [2024-12-09 06:29:30.444028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.925 [2024-12-09 06:29:30.444037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.925 [2024-12-09 06:29:30.444042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.925 [2024-12-09 06:29:30.444047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.925 [2024-12-09 06:29:30.444057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.925 qpair failed and we were unable to recover it. 
00:30:35.925 [2024-12-09 06:29:30.453984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.454035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.454046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.454051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.454055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.454065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 00:30:35.926 [2024-12-09 06:29:30.464015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.464053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.464063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.464068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.464072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.464082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 00:30:35.926 [2024-12-09 06:29:30.474059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.474104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.474114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.474119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.474123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.474133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 
00:30:35.926 [2024-12-09 06:29:30.484130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.484177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.484186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.484191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.484195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.484206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 00:30:35.926 [2024-12-09 06:29:30.494101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.494142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.494152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.494159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.494164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.494174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 00:30:35.926 [2024-12-09 06:29:30.503993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.926 [2024-12-09 06:29:30.504033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.926 [2024-12-09 06:29:30.504043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.926 [2024-12-09 06:29:30.504049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.926 [2024-12-09 06:29:30.504053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:35.926 [2024-12-09 06:29:30.504063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.926 qpair failed and we were unable to recover it. 
00:30:36.188 [2024-12-09 06:29:30.514154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.514198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.514208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.514213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.514217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.514227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.524192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.524288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.524306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.524313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.524318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.524332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.534211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.534250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.534262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.534267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.534272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.534287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 
00:30:36.188 [2024-12-09 06:29:30.544240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.544278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.544288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.544293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.544298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.544309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.554240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.554286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.554296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.554301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.554306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.554316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.564296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.564341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.564351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.564356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.564361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.564371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 
00:30:36.188 [2024-12-09 06:29:30.574315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.574367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.574378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.574383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.574388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.574399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.584334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.584373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.584384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.584389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.584393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.584404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.594366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.594406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.594416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.594421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.594425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.594435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 
00:30:36.188 [2024-12-09 06:29:30.604296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.604339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.604350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.604355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.604360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.604371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.614444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.614522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.614532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.614537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.614541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.188 [2024-12-09 06:29:30.614552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.188 qpair failed and we were unable to recover it. 00:30:36.188 [2024-12-09 06:29:30.624334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.188 [2024-12-09 06:29:30.624374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.188 [2024-12-09 06:29:30.624387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.188 [2024-12-09 06:29:30.624392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.188 [2024-12-09 06:29:30.624397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.624407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 
00:30:36.189 [2024-12-09 06:29:30.634504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.634546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.634556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.634561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.634566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.634576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.644527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.644571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.644582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.644587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.644591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.644602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.654527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.654594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.654605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.654610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.654615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.654625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 
00:30:36.189 [2024-12-09 06:29:30.664614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.664662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.664671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.664677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.664684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.664694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.674634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.674679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.674689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.674694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.674699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.674709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.684634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.684675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.684685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.684690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.684695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.684705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 
00:30:36.189 [2024-12-09 06:29:30.694545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.694583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.694594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.694599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.694604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.694615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.704684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.704741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.704751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.704756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.704761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.704771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.714572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.714614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.714624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.714629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.714633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.714643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 
00:30:36.189 [2024-12-09 06:29:30.724751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.724795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.724805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.724810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.724814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.724824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.734670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.734718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.734727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.734733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.734737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.734747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 00:30:36.189 [2024-12-09 06:29:30.744766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.744807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.744817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.744822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.744827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.189 [2024-12-09 06:29:30.744836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.189 qpair failed and we were unable to recover it. 
00:30:36.189 [2024-12-09 06:29:30.754685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.189 [2024-12-09 06:29:30.754725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.189 [2024-12-09 06:29:30.754741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.189 [2024-12-09 06:29:30.754746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.189 [2024-12-09 06:29:30.754750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.190 [2024-12-09 06:29:30.754761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.190 qpair failed and we were unable to recover it. 00:30:36.190 [2024-12-09 06:29:30.764838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.190 [2024-12-09 06:29:30.764880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.190 [2024-12-09 06:29:30.764890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.190 [2024-12-09 06:29:30.764895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.190 [2024-12-09 06:29:30.764899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.190 [2024-12-09 06:29:30.764909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.190 qpair failed and we were unable to recover it. 00:30:36.450 [2024-12-09 06:29:30.774864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.450 [2024-12-09 06:29:30.774954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.450 [2024-12-09 06:29:30.774964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.450 [2024-12-09 06:29:30.774969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.450 [2024-12-09 06:29:30.774974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.450 [2024-12-09 06:29:30.774984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.450 qpair failed and we were unable to recover it. 
00:30:36.450 [2024-12-09 06:29:30.784896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.450 [2024-12-09 06:29:30.784940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.450 [2024-12-09 06:29:30.784950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.450 [2024-12-09 06:29:30.784955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.450 [2024-12-09 06:29:30.784960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.450 [2024-12-09 06:29:30.784969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.450 qpair failed and we were unable to recover it. 00:30:36.450 [2024-12-09 06:29:30.794911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.450 [2024-12-09 06:29:30.794953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.450 [2024-12-09 06:29:30.794963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.450 [2024-12-09 06:29:30.794968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.450 [2024-12-09 06:29:30.794974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.450 [2024-12-09 06:29:30.794984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 00:30:36.451 [2024-12-09 06:29:30.804925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.804969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.804980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.804985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.804990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.805000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 
00:30:36.451 [2024-12-09 06:29:30.814965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.815032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.815042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.815047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.815052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.815062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 00:30:36.451 [2024-12-09 06:29:30.824988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.825048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.825058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.825063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.825067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.825077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 00:30:36.451 [2024-12-09 06:29:30.835011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.835049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.835060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.835065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.835069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.835079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 
00:30:36.451 [2024-12-09 06:29:30.845096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.845142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.845152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.845157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.845162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.845172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 00:30:36.451 [2024-12-09 06:29:30.854954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.855002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.855012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.855017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.855021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.855031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 00:30:36.451 [2024-12-09 06:29:30.865089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.451 [2024-12-09 06:29:30.865130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.451 [2024-12-09 06:29:30.865140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.451 [2024-12-09 06:29:30.865145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.451 [2024-12-09 06:29:30.865149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.451 [2024-12-09 06:29:30.865159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.451 qpair failed and we were unable to recover it. 
00:30:36.451 [2024-12-09 06:29:30.875150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.451 [2024-12-09 06:29:30.875252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.451 [2024-12-09 06:29:30.875261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.451 [2024-12-09 06:29:30.875266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.451 [2024-12-09 06:29:30.875271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90
00:30:36.451 [2024-12-09 06:29:30.875281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:36.451 qpair failed and we were unable to recover it.
[... the identical seven-entry CONNECT failure sequence repeats at roughly 10 ms intervals, host timestamps 06:29:30.885 through 06:29:31.547 (Jenkins timestamps 00:30:36.451 through 00:30:36.979), every attempt ending "qpair failed and we were unable to recover it." ...]
00:30:36.979 [2024-12-09 06:29:31.536907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.979 [2024-12-09 06:29:31.536947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.979 [2024-12-09 06:29:31.536956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.979 [2024-12-09 06:29:31.536961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.979 [2024-12-09 06:29:31.536966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.979 [2024-12-09 06:29:31.536976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.979 qpair failed and we were unable to recover it. 00:30:36.979 [2024-12-09 06:29:31.546963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.979 [2024-12-09 06:29:31.547005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.979 [2024-12-09 06:29:31.547014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.979 [2024-12-09 06:29:31.547019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.979 [2024-12-09 06:29:31.547024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.979 [2024-12-09 06:29:31.547033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.979 qpair failed and we were unable to recover it. 00:30:36.979 [2024-12-09 06:29:31.557008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.979 [2024-12-09 06:29:31.557052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.979 [2024-12-09 06:29:31.557062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.979 [2024-12-09 06:29:31.557067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.979 [2024-12-09 06:29:31.557071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:36.979 [2024-12-09 06:29:31.557082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.979 qpair failed and we were unable to recover it. 
00:30:37.240 [2024-12-09 06:29:31.567039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.240 [2024-12-09 06:29:31.567086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.240 [2024-12-09 06:29:31.567096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.240 [2024-12-09 06:29:31.567101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.240 [2024-12-09 06:29:31.567106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.240 [2024-12-09 06:29:31.567116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.240 qpair failed and we were unable to recover it. 00:30:37.240 [2024-12-09 06:29:31.577052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.240 [2024-12-09 06:29:31.577094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.240 [2024-12-09 06:29:31.577104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.240 [2024-12-09 06:29:31.577109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.240 [2024-12-09 06:29:31.577114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.240 [2024-12-09 06:29:31.577123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.587074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.587116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.587126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.587131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.587135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.587145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-12-09 06:29:31.596963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.597005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.597016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.597021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.597025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.597036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.607000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.607047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.607058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.607063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.607067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.607077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.617134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.617173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.617183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.617190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.617195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.617205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-12-09 06:29:31.627153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.627192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.627202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.627207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.627211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.627221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.637207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.637246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.637257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.637263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.637269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.637280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.647232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.647281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.647290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.647295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.647300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.647310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-12-09 06:29:31.657248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.657287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.657297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.657302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.657306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.657319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.667293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.667336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.667346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.667352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.667356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.667366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.677323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.677366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.677377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.677382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.677386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.677396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 
00:30:37.241 [2024-12-09 06:29:31.687223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.687265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.687275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.687280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.687285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.687295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.697322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.697359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.697369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.697374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.697378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.241 [2024-12-09 06:29:31.697388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.241 qpair failed and we were unable to recover it. 00:30:37.241 [2024-12-09 06:29:31.707393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.241 [2024-12-09 06:29:31.707476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.241 [2024-12-09 06:29:31.707487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.241 [2024-12-09 06:29:31.707492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.241 [2024-12-09 06:29:31.707497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.707507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-12-09 06:29:31.717423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.717465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.717475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.717481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.717485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.717495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.727444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.727491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.727500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.727505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.727510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.727520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.737491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.737532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.737542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.737547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.737552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.737562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-12-09 06:29:31.747469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.747505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.747518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.747523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.747527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.747538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.757573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.757618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.757628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.757633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.757638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.757648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.767542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.767588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.767598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.767603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.767608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.767618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-12-09 06:29:31.777590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.777660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.777670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.777675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.777680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.777690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.787483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.787541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.787551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.787556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.787565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.787575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.797647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.797688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.797698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.797703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.797707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.797717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 
00:30:37.242 [2024-12-09 06:29:31.807677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.807721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.807731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.807737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.807741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.807751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.242 [2024-12-09 06:29:31.817685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.242 [2024-12-09 06:29:31.817739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.242 [2024-12-09 06:29:31.817749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.242 [2024-12-09 06:29:31.817754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.242 [2024-12-09 06:29:31.817758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.242 [2024-12-09 06:29:31.817768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.242 qpair failed and we were unable to recover it. 00:30:37.503 [2024-12-09 06:29:31.827721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.503 [2024-12-09 06:29:31.827803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.503 [2024-12-09 06:29:31.827813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.503 [2024-12-09 06:29:31.827818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.503 [2024-12-09 06:29:31.827822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.503 [2024-12-09 06:29:31.827832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.503 qpair failed and we were unable to recover it. 
00:30:37.503 [2024-12-09 06:29:31.837743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.503 [2024-12-09 06:29:31.837784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.503 [2024-12-09 06:29:31.837795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.503 [2024-12-09 06:29:31.837800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.503 [2024-12-09 06:29:31.837805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a8000b90 00:30:37.503 [2024-12-09 06:29:31.837815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-12-09 06:29:31.847791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.847846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.847869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.847878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.847885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71b0000b90 00:30:37.504 [2024-12-09 06:29:31.847904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-12-09 06:29:31.857804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.857851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.857866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.857872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.857879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71b0000b90 00:30:37.504 [2024-12-09 06:29:31.857893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.504 qpair failed and we were unable to recover it. 
00:30:37.504 [2024-12-09 06:29:31.867828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.867875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.867897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.867906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.867912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a4000b90 00:30:37.504 [2024-12-09 06:29:31.867931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-12-09 06:29:31.877855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.877908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.877930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.877937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.877943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f71a4000b90 00:30:37.504 [2024-12-09 06:29:31.877958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-12-09 06:29:31.887872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.887979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.888043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.888070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.888090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaaed30 00:30:37.504 [2024-12-09 06:29:31.888142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.504 qpair failed and we were unable to recover it. 
00:30:37.504 [2024-12-09 06:29:31.897912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.504 [2024-12-09 06:29:31.897977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.504 [2024-12-09 06:29:31.898008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.504 [2024-12-09 06:29:31.898024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.504 [2024-12-09 06:29:31.898038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaaed30 00:30:37.504 [2024-12-09 06:29:31.898073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-12-09 06:29:31.898227] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:37.504 A controller has encountered a failure and is being reset. 00:30:37.504 [2024-12-09 06:29:31.898348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa38a0 (9): Bad file descriptor 00:30:37.504 Controller properly reset. 00:30:37.764 Initializing NVMe Controllers 00:30:37.764 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:37.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:37.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:37.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:37.764 Initialization complete. Launching workers. 
00:30:37.764 Starting thread on core 1 00:30:37.764 Starting thread on core 2 00:30:37.764 Starting thread on core 3 00:30:37.764 Starting thread on core 0 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:37.765 00:30:37.765 real 0m11.698s 00:30:37.765 user 0m21.907s 00:30:37.765 sys 0m3.744s 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:37.765 ************************************ 00:30:37.765 END TEST nvmf_target_disconnect_tc2 00:30:37.765 ************************************ 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.765 rmmod nvme_tcp 00:30:37.765 rmmod nvme_fabrics 00:30:37.765 rmmod nvme_keyring 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 505124 ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 505124 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 505124 ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 505124 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 505124 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 505124' 00:30:37.765 killing process with pid 505124 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 505124 00:30:37.765 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 505124 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.025 06:29:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.935 06:29:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.935 00:30:39.935 real 0m21.870s 00:30:39.935 user 0m51.008s 00:30:39.935 sys 0m9.644s 00:30:39.935 06:29:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.935 06:29:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:39.935 ************************************ 00:30:39.935 END TEST nvmf_target_disconnect 00:30:39.935 ************************************ 00:30:39.935 06:29:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:40.195 00:30:40.195 real 6m31.409s 00:30:40.195 user 11m14.099s 00:30:40.195 sys 2m12.124s 00:30:40.195 06:29:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.195 06:29:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.195 ************************************ 00:30:40.195 END TEST nvmf_host 00:30:40.195 ************************************ 00:30:40.195 06:29:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:40.195 06:29:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:40.195 06:29:34 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.195 06:29:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.195 06:29:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.195 06:29:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.195 ************************************ 00:30:40.195 START TEST nvmf_target_core_interrupt_mode 00:30:40.195 ************************************ 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.195 * Looking for test storage... 00:30:40.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.195 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.455 --rc genhtml_branch_coverage=1 00:30:40.455 --rc genhtml_function_coverage=1 00:30:40.455 --rc genhtml_legend=1 00:30:40.455 --rc geninfo_all_blocks=1 00:30:40.455 --rc geninfo_unexecuted_blocks=1 00:30:40.455 00:30:40.455 ' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.455 --rc genhtml_branch_coverage=1 00:30:40.455 --rc genhtml_function_coverage=1 00:30:40.455 --rc genhtml_legend=1 00:30:40.455 --rc geninfo_all_blocks=1 00:30:40.455 --rc geninfo_unexecuted_blocks=1 00:30:40.455 00:30:40.455 ' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.455 --rc genhtml_branch_coverage=1 00:30:40.455 --rc genhtml_function_coverage=1 00:30:40.455 --rc genhtml_legend=1 00:30:40.455 --rc geninfo_all_blocks=1 00:30:40.455 --rc geninfo_unexecuted_blocks=1 00:30:40.455 00:30:40.455 ' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.455 --rc genhtml_branch_coverage=1 00:30:40.455 --rc genhtml_function_coverage=1 00:30:40.455 --rc genhtml_legend=1 00:30:40.455 --rc geninfo_all_blocks=1 00:30:40.455 --rc geninfo_unexecuted_blocks=1 00:30:40.455 00:30:40.455 ' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.455 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.456 ************************************ 00:30:40.456 START TEST nvmf_abort 00:30:40.456 ************************************ 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.456 * Looking for test storage... 00:30:40.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:40.456 06:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:40.716 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.717 --rc genhtml_branch_coverage=1 00:30:40.717 --rc genhtml_function_coverage=1 00:30:40.717 --rc genhtml_legend=1 00:30:40.717 --rc geninfo_all_blocks=1 00:30:40.717 --rc geninfo_unexecuted_blocks=1 00:30:40.717 00:30:40.717 ' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.717 --rc genhtml_branch_coverage=1 00:30:40.717 --rc genhtml_function_coverage=1 00:30:40.717 --rc genhtml_legend=1 00:30:40.717 --rc geninfo_all_blocks=1 00:30:40.717 --rc geninfo_unexecuted_blocks=1 00:30:40.717 00:30:40.717 ' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.717 --rc genhtml_branch_coverage=1 00:30:40.717 --rc genhtml_function_coverage=1 00:30:40.717 --rc genhtml_legend=1 00:30:40.717 --rc geninfo_all_blocks=1 00:30:40.717 --rc geninfo_unexecuted_blocks=1 00:30:40.717 00:30:40.717 ' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.717 --rc genhtml_branch_coverage=1 00:30:40.717 --rc genhtml_function_coverage=1 00:30:40.717 --rc genhtml_legend=1 00:30:40.717 --rc geninfo_all_blocks=1 00:30:40.717 --rc geninfo_unexecuted_blocks=1 00:30:40.717 00:30:40.717 ' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.717 06:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.717 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.855 06:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:48.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:48.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:48.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:48.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:30:48.855 00:30:48.855 --- 10.0.0.2 ping statistics --- 00:30:48.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.855 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:30:48.855 00:30:48.855 --- 10.0.0.1 ping statistics --- 00:30:48.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.855 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=510334 
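The NIC discovery and namespace plumbing replayed above reduce to a short sequence. A condensed sketch follows; the device names cvl_0_0/cvl_0_1 and the BDFs are the ones the trace found, while the pci_bus_cache map is built earlier in scripts/common.sh and assumed here:

  # Classify NICs by vendor:device ID, then map each BDF to its netdev via sysfs.
  intel=0x8086
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  pci_devs=("${e810[@]}")        # this rig: 0000:4b:00.0 and 0000:4b:00.1 (ice driver)
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")                    # -> cvl_0_0, cvl_0_1
  done
  # Put the target port in its own namespace; keep the initiator port in the root one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so teardown can strip it by comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The two successful pings above (0.638 ms and 0.283 ms) confirm the loopback topology before the target is even started.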
00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 510334 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 510334 ']' 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.855 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.856 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.856 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.856 [2024-12-09 06:29:42.553839] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:48.856 [2024-12-09 06:29:42.554932] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:30:48.856 [2024-12-09 06:29:42.554983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.856 [2024-12-09 06:29:42.633817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.856 [2024-12-09 06:29:42.684160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.856 [2024-12-09 06:29:42.684213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.856 [2024-12-09 06:29:42.684220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.856 [2024-12-09 06:29:42.684227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.856 [2024-12-09 06:29:42.684234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.856 [2024-12-09 06:29:42.686230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.856 [2024-12-09 06:29:42.686385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.856 [2024-12-09 06:29:42.686385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.856 [2024-12-09 06:29:42.761352] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:48.856 [2024-12-09 06:29:42.762389] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:48.856 [2024-12-09 06:29:42.762541] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:48.856 [2024-12-09 06:29:42.762582] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.856 [2024-12-09 06:29:43.423260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.856 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 Malloc0 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 Delay0 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 [2024-12-09 06:29:43.511173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.117 06:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:49.117 [2024-12-09 06:29:43.656082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:51.659 Initializing NVMe Controllers 00:30:51.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:51.659 controller IO queue size 128 less than required 00:30:51.659 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:51.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:51.659 Initialization complete. Launching workers. 
00:30:51.659 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31075 00:30:51.659 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31132, failed to submit 66 00:30:51.659 success 31075, unsuccessful 57, failed 0 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.659 rmmod nvme_tcp 00:30:51.659 rmmod nvme_fabrics 00:30:51.659 rmmod nvme_keyring 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 510334 ']' 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 510334 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 510334 ']' 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 510334 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510334 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510334' 00:30:51.659 killing process with pid 510334 00:30:51.659 
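Stripped of xtrace noise, the whole abort test is this RPC sequence plus one example run; arguments are taken verbatim from the trace, with the rpc.py and example paths abbreviated:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev, 4 KiB blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s avg/p99 latencies (usec)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The Delay0 wrapper is the point of the exercise: with every I/O parked for roughly a second, nearly all of the 31132 submitted aborts find their target still in flight. That yields the tallies above: 31075 successful aborts against only 123 normally completed I/Os, 57 unsuccessful aborts, and 66 that could not be submitted.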
06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 510334 00:30:51.659 06:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 510334 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.659 06:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.573 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:53.573 00:30:53.573 real 0m13.255s 00:30:53.573 user 0m11.164s 00:30:53.573 sys 0m6.697s 00:30:53.573 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.573 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:53.573 ************************************ 00:30:53.573 END TEST nvmf_abort 00:30:53.573 ************************************ 00:30:53.573 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:53.834 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:53.834 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.834 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:53.834 ************************************ 00:30:53.834 START TEST nvmf_ns_hotplug_stress 00:30:53.834 ************************************ 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:53.835 * Looking for test storage... 
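The teardown that just ran is symmetric with the setup and is easier to read in one piece. A condensed sketch; the _remove_spdk_ns helper body is not shown in the trace, so the netns deletion line is an assumption:

  kill 510334 && wait 510334                    # killprocess: stop nvmf_tgt, reap it
  modprobe -v -r nvme-tcp                       # also unloads nvme_fabrics, nvme_keyring
  # iptr: drop only the rules tagged SPDK_NVMF at setup time.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk               # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Tagging the firewall rule with an SPDK_NVMF comment at setup is what makes the grep -v cleanup safe: only the rules this test added are stripped, and any pre-existing INPUT rules survive the restore.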
00:30:53.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.835 --rc genhtml_branch_coverage=1 00:30:53.835 --rc genhtml_function_coverage=1 00:30:53.835 --rc genhtml_legend=1 00:30:53.835 --rc geninfo_all_blocks=1 00:30:53.835 --rc geninfo_unexecuted_blocks=1 00:30:53.835 00:30:53.835 ' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.835 --rc genhtml_branch_coverage=1 00:30:53.835 --rc genhtml_function_coverage=1 00:30:53.835 --rc genhtml_legend=1 00:30:53.835 --rc geninfo_all_blocks=1 00:30:53.835 --rc geninfo_unexecuted_blocks=1 00:30:53.835 00:30:53.835 ' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.835 --rc genhtml_branch_coverage=1 00:30:53.835 --rc genhtml_function_coverage=1 00:30:53.835 --rc genhtml_legend=1 00:30:53.835 --rc geninfo_all_blocks=1 00:30:53.835 --rc geninfo_unexecuted_blocks=1 00:30:53.835 00:30:53.835 ' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.835 --rc genhtml_branch_coverage=1 00:30:53.835 --rc genhtml_function_coverage=1 
00:30:53.835 --rc genhtml_legend=1 00:30:53.835 --rc geninfo_all_blocks=1 00:30:53.835 --rc geninfo_unexecuted_blocks=1 00:30:53.835 00:30:53.835 ' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
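The cmp_versions walk traced a few lines above is how the harness decides that the installed lcov (1.15) predates 2.x, which selects the --rc lcov_branch_coverage=1 spelling of the coverage options. A minimal bash sketch of that field-by-field numeric comparison (an illustration of the technique, not the verbatim scripts/common.sh source; assumes purely numeric dot-separated versions):

  lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    # compare corresponding fields numerically; missing fields count as 0
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: use --rc lcov_branch_coverage=1'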
00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:53.835 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.836 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.096 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.096 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.096 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.096 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.689 06:29:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.689 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.690 06:29:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:00.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:00.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.690 
06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:00.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:00.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.690 06:29:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.690 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:31:00.950 00:31:00.950 --- 10.0.0.2 ping statistics --- 00:31:00.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.950 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:31:00.950 00:31:00.950 --- 10.0.0.1 ping statistics --- 00:31:00.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.950 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=514863 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 514863 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 514863 ']' 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
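The bring-up traced above (nvmf/common.sh@265-291) is the whole physical-NIC test topology: one port of the back-to-back e810 pair (cvl_0_0) is moved into a private network namespace to play the target, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP port 4420 on the initiator interface, and a single ping in each direction proves the path before any NVMe/TCP traffic flows. Condensed into a runnable sketch (root required; interface and namespace names as in this run):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # root ns -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator port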
00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:00.950 06:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:01.210 [2024-12-09 06:29:55.536336] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:01.210 [2024-12-09 06:29:55.537421] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:31:01.210 [2024-12-09 06:29:55.537484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.210 [2024-12-09 06:29:55.618991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.210 [2024-12-09 06:29:55.668792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.210 [2024-12-09 06:29:55.668844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.210 [2024-12-09 06:29:55.668854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.210 [2024-12-09 06:29:55.668861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.210 [2024-12-09 06:29:55.668867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.210 [2024-12-09 06:29:55.670685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.210 [2024-12-09 06:29:55.670838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.210 [2024-12-09 06:29:55.670839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.210 [2024-12-09 06:29:55.747037] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:01.210 [2024-12-09 06:29:55.748064] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:01.210 [2024-12-09 06:29:55.748307] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:01.210 [2024-12-09 06:29:55.748346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
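The reactor placement above follows directly from the core mask: nvmf_tgt is launched inside the target namespace with -m 0xE, and 0xE is binary 1110, so bit 0 (core 0) is clear and reactors start on cores 1, 2 and 3, matching the three reactor_run notices and the "Total cores available: 3" line. Decoding any such mask is one loop:

  mask=0xE
  for ((core = 0; core < 8; core++)); do
    (( mask >> core & 1 )) && echo "reactor on core $core"
  done
  # prints: reactor on core 1, core 2, core 3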
00:31:01.779 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.779 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:01.779 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:01.779 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:01.779 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:02.040 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.040 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:02.040 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:02.040 [2024-12-09 06:29:56.543735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.040 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:02.301 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.561 [2024-12-09 06:29:56.916434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.561 06:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.561 06:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:02.822 Malloc0 00:31:02.822 06:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:03.082 Delay0 00:31:03.082 06:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.082 06:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:03.343 NULL1 00:31:03.343 06:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
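Everything the sh@ markers trace from here on follows one pattern, reconstructed below from the ns_hotplug_stress.sh line numbers in the log: provision a delay bdev and a null bdev behind cnode1, start spdk_nvme_perf as background I/O, then, while perf is still alive, remove namespace 1 and re-add Delay0 while growing NULL1 by one block per pass (null_size 1001, 1002, ... in the iterations that follow below). A condensed sketch under those assumptions, with the rpc.py and binary paths abbreviated and the surrounding harness omitted, not the verbatim script:

  rpc=./scripts/rpc.py   # abbreviation of the full workspace path in the log
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  null_size=1000
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  while kill -0 $PERF_PID 2>/dev/null; do       # keep hotplugging while perf runs
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 $null_size      # grow NULL1 by one block
  done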
00:31:03.603 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=515222 00:31:03.603 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:03.603 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.603 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:03.864 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.864 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:03.864 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:04.124 true 00:31:04.124 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:04.124 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.385 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.385 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:04.385 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:04.645 true 00:31:04.645 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:04.645 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.905 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.905 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:04.905 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:05.165 true 00:31:05.165 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:05.165 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.426 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.426 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:05.426 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:05.686 true 00:31:05.686 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:05.686 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.946 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.207 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:06.207 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:06.207 true 00:31:06.207 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:06.207 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.467 06:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.727 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:06.727 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:06.727 true 00:31:06.727 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:06.727 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.988 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:31:07.247 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:07.247 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:07.247 true 00:31:07.247 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:07.247 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.506 06:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.766 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:07.766 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:07.766 true 00:31:07.766 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:07.766 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.025 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.284 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:08.284 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:08.284 true 00:31:08.284 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:08.284 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.544 06:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.803 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:08.803 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:08.803 true 00:31:08.803 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:08.803 
06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.063 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.359 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:09.359 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:09.359 true 00:31:09.359 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:09.359 06:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.619 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.878 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:09.878 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:09.878 true 00:31:09.878 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:09.878 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.137 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.395 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:10.395 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:10.395 true 00:31:10.395 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:10.395 06:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.654 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.913 06:30:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:10.913 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:10.913 true 00:31:10.913 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:10.913 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.173 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.433 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:11.433 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:11.433 true 00:31:11.433 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:11.433 06:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.694 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.694 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:11.694 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:11.954 true 00:31:11.955 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:11.955 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.214 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.214 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:12.214 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:12.475 true 00:31:12.475 06:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:12.475 06:30:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.745 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.745 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:12.745 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:13.006 true 00:31:13.006 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:13.006 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.266 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.266 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:13.266 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:13.527 true 00:31:13.527 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:13.527 06:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.787 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.787 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:13.787 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:14.047 true 00:31:14.048 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:14.048 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.308 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.308 06:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:14.308 06:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:14.569 true 00:31:14.569 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:14.569 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.828 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.828 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:14.828 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:15.089 true 00:31:15.089 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:15.089 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.348 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.348 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:15.348 06:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:15.608 true 00:31:15.608 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:15.608 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.868 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.868 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:15.868 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:16.129 true 00:31:16.129 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:16.129 06:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.389 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.389 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:16.389 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:16.650 true 00:31:16.650 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:16.650 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.911 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.911 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:16.911 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:17.173 true 00:31:17.173 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:17.173 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.433 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.433 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:17.433 06:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:17.693 true 00:31:17.693 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:17.693 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.954 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.954 06:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:17.954 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:18.215 true 00:31:18.215 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:18.215 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.476 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.476 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:18.476 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:18.737 true 00:31:18.737 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:18.737 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.996 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.996 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:18.996 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:19.257 true 00:31:19.257 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:19.257 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.517 06:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.517 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:19.517 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:19.776 true 00:31:19.776 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:19.776 06:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.035 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.035 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:20.035 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:20.295 true 00:31:20.295 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:20.295 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.554 06:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.554 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:20.554 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:20.814 true 00:31:20.814 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:20.814 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.074 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.074 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:21.074 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:21.334 true 00:31:21.334 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:21.334 06:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.595 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.595 06:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:21.595 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:21.856 true 00:31:21.856 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:21.856 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.116 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.375 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:22.375 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:22.375 true 00:31:22.375 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:22.375 06:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.635 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.894 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:22.894 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:22.894 true 00:31:22.894 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:22.894 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.154 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.415 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:23.415 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:23.415 true 00:31:23.415 06:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:23.415 06:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.675 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.935 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:23.935 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:23.935 true 00:31:23.935 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:23.935 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.195 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.456 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:24.456 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:24.456 true 00:31:24.456 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:24.456 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.717 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.977 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:24.977 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:24.977 true 00:31:24.977 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:24.977 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.237 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.498 06:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:25.498 06:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:25.498 true 00:31:25.498 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:25.498 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.758 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.758 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:25.758 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:26.019 true 00:31:26.019 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:26.019 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.280 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.280 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:26.280 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:26.541 true 00:31:26.541 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:26.541 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.802 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.062 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:27.062 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:27.062 true 00:31:27.062 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:27.062 06:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.323 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.583 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:27.583 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:27.583 true 00:31:27.583 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:27.583 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.866 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.866 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:27.866 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:28.126 true 00:31:28.126 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:28.127 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.387 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.387 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:28.387 06:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:28.647 true 00:31:28.647 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:28.648 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.907 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.907 06:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:28.907 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:29.166 true 00:31:29.166 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:29.166 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.426 06:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.685 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:29.685 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:29.685 true 00:31:29.685 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:29.685 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.944 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.217 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:30.217 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:30.217 true 00:31:30.217 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:30.217 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.477 06:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.736 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:30.736 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:30.736 true 00:31:30.736 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:30.736 06:30:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.994 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.253 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:31.253 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:31.253 true 00:31:31.253 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:31.254 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.512 06:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.772 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:31.772 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:31:31.772 true 00:31:31.772 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:31.772 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.031 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.291 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:31:32.291 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:31:32.291 true 00:31:32.291 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222 00:31:32.291 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.551 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.811 06:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:31:32.811 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:31:32.811 true
00:31:32.811 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222
00:31:32.811 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.071 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:33.332 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:31:33.332 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:31:33.332 true
00:31:33.332 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222
00:31:33.332 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.593 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:33.854 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058
00:31:33.854 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058
00:31:33.854 Initializing NVMe Controllers
00:31:33.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:33.854 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:31:33.854 Controller IO queue size 128, less than required.
00:31:33.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:33.854 WARNING: Some requested NVMe devices were skipped
00:31:33.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:33.854 Initialization complete. Launching workers.
00:31:33.854 ========================================================
00:31:33.854                                                                    Latency(us)
00:31:33.854 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:31:33.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30221.53      14.76    4235.26    1170.94    9947.95
00:31:33.854 ========================================================
00:31:33.854 Total                                                           :   30221.53      14.76    4235.26    1170.94    9947.95
00:31:33.854
00:31:33.854 true
00:31:33.854 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 515222
00:31:33.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (515222) - No such process
00:31:33.854 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 515222
00:31:33.854 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:34.115 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:34.375 null0
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:34.375 06:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:34.636 null1
00:31:34.636 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:34.636 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:34.636 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:34.897 null2
00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:34.897 null3 00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:34.897 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:35.158 null4 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:35.158 null5 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:35.158 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:35.419 null6 00:31:35.419 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:35.419 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:35.419 06:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:35.680 null7 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.680 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 521210 521211 521213 521214 521216 521217 521219 521222 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:35.681 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:35.941 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:35.942 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.203 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.466 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.467 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.467 06:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.467 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.728 06:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.728 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:36.729 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:36.990 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:37.252 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:37.513 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.513 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:37.514 06:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.514 06:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:37.514 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:37.514 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:37.514 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:37.514 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.514 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:37.774 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:37.774 
06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:37.774 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:37.774 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.774 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.775 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.035 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.296 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.296 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.296 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:38.296 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.296 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.297 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.558 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.559 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.819 06:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.819 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:31:38.820 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.080 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.341 rmmod nvme_tcp 00:31:39.341 rmmod nvme_fabrics 00:31:39.341 
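
The @16-@18 lines above are the entire body of the hotplug stress loop: one add/remove loop per namespace, several running in parallel, which is why adds and removes for different nsids interleave in the trace, and why the final passes show only the @16 "(( ++i )) / (( i < 10 ))" pairs as each loop runs out. A minimal sketch of what the trace implies, assuming a helper named add_remove and backgrounded invocations (both assumptions; the NQN, the null0-null7 bdev names, and the ten-pass bound come straight from the trace):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        # Hot-add one namespace, then hot-remove it, ten times over.
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                # @16 in the trace
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    # Eight loops in flight at once, nsid n backed by null bdev null(n-1).
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait
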
rmmod nvme_keyring 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 514863 ']' 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 514863 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 514863 ']' 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 514863 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.341 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 514863 00:31:39.602 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:39.602 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:39.602 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514863' 00:31:39.602 killing process with pid 514863 00:31:39.602 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 514863 00:31:39.602 06:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 514863 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.602 06:30:34 
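
The common/autotest_common.sh@954-@978 lines above trace the teardown of the nvmf target (pid 514863). A sketch of the killprocess helper as the trace suggests it works; the trace only exercises the reactor_1 path, so the handling of the sudo case here is an assumption:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1                            # @954: refuse an empty pid
        kill -0 "$pid" 2> /dev/null || return 0              # @958: already gone, nothing to do
        if [[ $(uname) == Linux ]]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 in the run above
        fi
        if [[ $process_name == sudo ]]; then                 # @964
            pkill -P "$pid"   # assumption: signal the child that sudo spawned, not sudo itself
        else
            echo "killing process with pid $pid"             # @972
            kill "$pid"                                      # @973
        fi
        wait "$pid" 2> /dev/null || true                     # @978: reap it before returning
    }
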
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.602 06:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.143 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.143 00:31:42.144 real 0m47.936s 00:31:42.144 user 3m0.339s 00:31:42.144 sys 0m21.574s 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:42.144 ************************************ 00:31:42.144 END TEST nvmf_ns_hotplug_stress 00:31:42.144 ************************************ 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:42.144 ************************************ 00:31:42.144 START TEST nvmf_delete_subsystem 00:31:42.144 ************************************ 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:42.144 * Looking for test storage... 
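
The starred END TEST / START TEST banners and the real/user/sys block above come from the run_test wrapper in common/autotest_common.sh (@1105-@1130 in the trace). A rough reconstruction, assuming the wrapper simply banners and times the test body; the real helper also toggles xtrace around the banners, which this sketch omits:

    run_test() {
        [ "$#" -le 1 ] && return 1        # @1105: need a test name plus a command
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"      # banner printed before each test above
        echo "************************************"
        time "$@"                         # emits the real/user/sys block on exit
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Invoked above as run_test nvmf_delete_subsystem .../test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode, which is what starts the next test's trace.
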
00:31:42.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.144 --rc genhtml_branch_coverage=1 00:31:42.144 --rc genhtml_function_coverage=1 00:31:42.144 --rc genhtml_legend=1 00:31:42.144 --rc geninfo_all_blocks=1 00:31:42.144 --rc geninfo_unexecuted_blocks=1 00:31:42.144 00:31:42.144 ' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.144 --rc genhtml_branch_coverage=1 00:31:42.144 --rc genhtml_function_coverage=1 00:31:42.144 --rc genhtml_legend=1 00:31:42.144 --rc geninfo_all_blocks=1 00:31:42.144 --rc geninfo_unexecuted_blocks=1 00:31:42.144 00:31:42.144 ' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.144 --rc genhtml_branch_coverage=1 00:31:42.144 --rc genhtml_function_coverage=1 00:31:42.144 --rc genhtml_legend=1 00:31:42.144 --rc geninfo_all_blocks=1 00:31:42.144 --rc geninfo_unexecuted_blocks=1 00:31:42.144 00:31:42.144 ' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.144 --rc genhtml_branch_coverage=1 00:31:42.144 --rc genhtml_function_coverage=1 00:31:42.144 --rc 
genhtml_legend=1 00:31:42.144 --rc geninfo_all_blocks=1 00:31:42.144 --rc geninfo_unexecuted_blocks=1 00:31:42.144 00:31:42.144 ' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.144 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.144 06:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.145 06:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.280 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.281 06:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.281 06:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:50.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:50.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.281 06:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:50.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:50.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:50.281 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:50.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:31:50.282 00:31:50.282 --- 10.0.0.2 ping statistics --- 00:31:50.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.282 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:50.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:31:50.282 00:31:50.282 --- 10.0.0.1 ping statistics --- 00:31:50.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.282 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=526058 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 526058 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 526058 ']' 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
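nvmf_tcp_init, traced above, builds the whole NVMe/TCP test bed on a single host by moving the target-side port into its own network namespace, so initiator and target traffic crosses a real NIC-to-NIC path between the two e810 ports. The same steps, condensed from the trace (the device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this run detected and assigned):

    ip netns add cvl_0_0_ns_spdk                # target lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 on the initiator interface, tagged so teardown can strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # sanity pings in both directions before any NVMe traffic
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1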
00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.282 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.282 [2024-12-09 06:30:43.895058] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:50.282 [2024-12-09 06:30:43.896150] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:31:50.282 [2024-12-09 06:30:43.896199] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.282 [2024-12-09 06:30:43.993581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:50.282 [2024-12-09 06:30:44.042478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.282 [2024-12-09 06:30:44.042532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.282 [2024-12-09 06:30:44.042540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.282 [2024-12-09 06:30:44.042547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.282 [2024-12-09 06:30:44.042552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.282 [2024-12-09 06:30:44.044119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.282 [2024-12-09 06:30:44.044124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.282 [2024-12-09 06:30:44.120059] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:50.283 [2024-12-09 06:30:44.120693] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:50.283 [2024-12-09 06:30:44.120969] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
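The notices above come from starting nvmf_tgt with core mask 0x3 and --interrupt-mode: DPDK places reactors on cores 0 and 1, and each spdk_thread is switched to interrupt (fd-driven) mode instead of busy polling. The launch command is traced verbatim at nvmf/common.sh@508; the readiness wait below is an illustrative stand-in for waitforlisten, which in reality polls the RPC socket until the target answers:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # illustrative readiness poll: retry until the RPC socket responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done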
00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 [2024-12-09 06:30:44.753107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 [2024-12-09 06:30:44.781478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 NULL1 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 Delay0 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=526123 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:50.283 06:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:50.543 [2024-12-09 06:30:44.883585] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
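Everything the rpc_cmd traces above configure can be reproduced with plain rpc.py calls; the arguments below are taken verbatim from this run. The delay bdev is the crux of the test: with 1000000 us (about 1 s) added to every read and write, the perf job's 128 queued I/Os are guaranteed to still be in flight when the subsystem is deleted:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB backing bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then connects with -c 0xC (lcores 2 and 3, matching the "Associating ... with lcore 2/3" lines in the summary further down) and drives a 70/30 randrw workload at queue depth 128 for 5 seconds.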
00:31:52.453 06:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:52.453 06:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.453 06:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 [2024-12-09 06:30:46.970388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ee8000c40 is same with the state(6) to be set 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read 
completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error 
(sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 Write completed with error (sct=0, sc=8) 00:31:52.453 Read completed with error (sct=0, sc=8) 00:31:52.453 starting I/O failed: -6 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 starting I/O failed: -6 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 starting I/O failed: -6 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 starting I/O failed: -6 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 starting I/O failed: -6 00:31:52.454 [2024-12-09 06:30:46.970736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0680 is same with the state(6) to be set 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error 
(sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:52.454 Write completed with error (sct=0, sc=8) 00:31:52.454 Read completed with error (sct=0, sc=8) 00:31:53.396 [2024-12-09 06:30:47.940692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b19b0 is same with the state(6) to be set 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 Read completed with error (sct=0, sc=8) 00:31:53.396 Write completed with error (sct=0, sc=8) 00:31:53.396 [2024-12-09 06:30:47.968128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ee800d800 is same with the state(6) to be set 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 
00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 [2024-12-09 06:30:47.972645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0860 is same with the state(6) to be set 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 [2024-12-09 06:30:47.972750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ee800d020 is same with the state(6) to be set 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Write completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 Read completed with error (sct=0, sc=8) 00:31:53.397 [2024-12-09 06:30:47.973272] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b04a0 is same with the state(6) to be set
00:31:53.397 Initializing NVMe Controllers
00:31:53.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:53.397 Controller IO queue size 128, less than required.
00:31:53.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:53.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:53.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:53.397 Initialization complete. Launching workers.
00:31:53.397 ========================================================
00:31:53.397 Latency(us)
00:31:53.397 Device Information : IOPS MiB/s Average min max
00:31:53.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.08 0.08 904554.73 214.20 1009835.28
00:31:53.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.55 0.08 895199.18 292.90 1009693.12
00:31:53.397 ========================================================
00:31:53.397 Total : 334.63 0.16 899814.40 214.20 1009835.28
00:31:53.397
00:31:53.397 [2024-12-09 06:30:47.973742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b19b0 (9): Bad file descriptor
00:31:53.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:53.397 06:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:53.397 06:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:53.397 06:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 526123
00:31:53.397 06:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 526123
00:31:53.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (526123) - No such process
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 526123
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 526123
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:53.968 06:30:48
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 526123 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:53.968 [2024-12-09 06:30:48.509277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=526726 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:53.968 06:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:54.229 [2024-12-09 06:30:48.581321] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:54.489 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:54.489 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:54.489 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:55.059 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:55.059 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:55.059 06:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:55.656 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:55.656 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:55.656 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:56.008 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:56.008 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:56.008 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:56.599 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:56.599 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:56.599 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:57.192 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:57.192 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726 00:31:57.192 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:57.192 Initializing NVMe Controllers 00:31:57.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:57.192 Controller IO queue size 128, less than required. 00:31:57.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:57.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:57.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:57.192 Initialization complete. Launching workers. 
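
Note: the delete_subsystem.sh@57/@58/@60 xtrace lines above are the harness polling for the second perf process to die after the subsystem is deleted. A hedged reconstruction of that loop is sketched below (variable names and the error handling are assumptions; the line numbers in the xtrace are the ground truth). kill -0 sends no signal, it only probes whether the pid exists, and the builtin's "No such process" message further down is what ends the polling:

```bash
# Reconstructed from the xtrace above (delete_subsystem.sh lines ~56-60).
delay=0
while kill -0 "$perf_pid"; do      # probe only; prints "No such process" once perf is gone
    if (( delay++ > 20 )); then    # roughly a 10 s budget at 0.5 s per lap
        echo "perf did not exit after subsystem deletion" >&2
        exit 1
    fi
    sleep 0.5
done
```
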
00:31:57.192 ========================================================
00:31:57.192 Latency(us)
00:31:57.192 Device Information : IOPS MiB/s Average min max
00:31:57.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003000.88 1000204.17 1040964.76
00:31:57.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004275.02 1000195.22 1041887.02
00:31:57.192 ========================================================
00:31:57.192 Total : 256.00 0.12 1003637.95 1000195.22 1041887.02
00:31:57.192
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 526726
00:31:57.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (526726) - No such process
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 526726
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:57.829 rmmod nvme_tcp
00:31:57.829 rmmod nvme_fabrics
00:31:57.829 rmmod nvme_keyring
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 526058 ']'
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 526058
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 526058 ']'
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 526058
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 526058 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 526058' 00:31:57.829 killing process with pid 526058 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 526058 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 526058 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.829 06:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.838 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.838 00:31:59.838 real 0m18.178s 00:31:59.838 user 0m26.338s 00:31:59.838 sys 0m7.386s 00:31:59.838 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.838 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:59.838 ************************************ 00:31:59.838 END TEST nvmf_delete_subsystem 00:31:59.838 ************************************ 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.166 ************************************ 00:32:00.166 START TEST nvmf_host_management 00:32:00.166 ************************************ 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:00.166 * Looking for test storage... 00:32:00.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:00.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.166 --rc genhtml_branch_coverage=1 00:32:00.166 --rc genhtml_function_coverage=1 00:32:00.166 --rc genhtml_legend=1 00:32:00.166 --rc geninfo_all_blocks=1 00:32:00.166 --rc geninfo_unexecuted_blocks=1 00:32:00.166 00:32:00.166 ' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:00.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.166 --rc genhtml_branch_coverage=1 00:32:00.166 --rc genhtml_function_coverage=1 00:32:00.166 --rc genhtml_legend=1 00:32:00.166 --rc geninfo_all_blocks=1 00:32:00.166 --rc geninfo_unexecuted_blocks=1 00:32:00.166 00:32:00.166 ' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:00.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.166 --rc genhtml_branch_coverage=1 00:32:00.166 --rc genhtml_function_coverage=1 00:32:00.166 --rc genhtml_legend=1 00:32:00.166 --rc geninfo_all_blocks=1 00:32:00.166 --rc geninfo_unexecuted_blocks=1 00:32:00.166 00:32:00.166 ' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:00.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.166 --rc genhtml_branch_coverage=1 00:32:00.166 --rc genhtml_function_coverage=1 00:32:00.166 --rc genhtml_legend=1 
00:32:00.166 --rc geninfo_all_blocks=1 00:32:00.166 --rc geninfo_unexecuted_blocks=1 00:32:00.166 00:32:00.166 ' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.166 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.167 06:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.167 06:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.531 06:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:08.531 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:08.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.531 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
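
Note: the NIC probing above is plain sysfs spelunking. gather_supported_nvmf_pci_devs matches known vendor/device pairs (here 0x8086:0x159b, an Intel E810 port bound to the ice driver) and then resolves each PCI function to its kernel netdev via /sys/bus/pci/devices/<bdf>/net, exactly as the pci_net_devs=() expansion in the xtrace shows. The same mapping done by hand, using the two BDFs found on this machine:

```bash
# Map the PCI functions found above to their netdev names via sysfs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdev ]] && echo "$pci -> ${netdev##*/}"   # e.g. cvl_0_0
    done
done
```
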
00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:08.532 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:08.532 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.532 06:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:32:08.532 00:32:08.532 --- 10.0.0.2 ping statistics --- 00:32:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.532 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:08.532 00:32:08.532 --- 10.0.0.1 ping statistics --- 00:32:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.532 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=531388 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 531388 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 531388 ']' 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
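
Note: at this point the target/initiator topology is in place. The target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits the NVMe/TCP port, and both pings prove the path. Condensed from the nvmf_tcp_init commands logged above (interface names are the ones from this run):

```bash
# Condensed from the nvmf_tcp_init steps in this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
```
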
00:32:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.532 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.532 [2024-12-09 06:31:02.225285] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.532 [2024-12-09 06:31:02.226376] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:32:08.532 [2024-12-09 06:31:02.226429] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.532 [2024-12-09 06:31:02.305245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:08.532 [2024-12-09 06:31:02.357307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.532 [2024-12-09 06:31:02.357361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.532 [2024-12-09 06:31:02.357370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.532 [2024-12-09 06:31:02.357377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.532 [2024-12-09 06:31:02.357383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.532 [2024-12-09 06:31:02.359343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.532 [2024-12-09 06:31:02.359515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.532 [2024-12-09 06:31:02.359743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:08.532 [2024-12-09 06:31:02.359745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.532 [2024-12-09 06:31:02.436076] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.532 [2024-12-09 06:31:02.436613] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:08.532 [2024-12-09 06:31:02.437169] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:08.532 [2024-12-09 06:31:02.437243] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:08.532 [2024-12-09 06:31:02.437424] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
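
Note: nvmf_tgt is launched inside the namespace with --interrupt-mode and -m 0x1E, and the startup notices confirm the decode: 0x1E is binary 11110, i.e. cores 1 through 4, matching "Total cores available: 4" and the four reactors reported above. The bdevperf client started further down uses -c 0x1 (core 0), so the two processes never share a core. A quick way to sanity-check any such mask:

```bash
# Expand an SPDK/DPDK core mask into core numbers: 0x1E -> cores 1 2 3 4.
mask=0x1E
for core in {0..31}; do
    (( mask & (1 << core) )) && printf 'core %d\n' "$core"
done
```
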
00:32:08.532 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.532 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:08.532 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.533 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.533 [2024-12-09 06:31:03.096762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.794 Malloc0 00:32:08.794 [2024-12-09 06:31:03.176605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=531586 00:32:08.794 06:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 531586 /var/tmp/bdevperf.sock 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 531586 ']' 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:08.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:08.794 { 00:32:08.794 "params": { 00:32:08.794 "name": "Nvme$subsystem", 00:32:08.794 "trtype": "$TEST_TRANSPORT", 00:32:08.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.794 "adrfam": "ipv4", 00:32:08.794 "trsvcid": "$NVMF_PORT", 00:32:08.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.794 "hdgst": ${hdgst:-false}, 00:32:08.794 "ddgst": ${ddgst:-false} 00:32:08.794 }, 00:32:08.794 "method": "bdev_nvme_attach_controller" 00:32:08.794 } 00:32:08.794 EOF 00:32:08.794 )") 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:08.794 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:08.794 "params": { 00:32:08.794 "name": "Nvme0", 00:32:08.794 "trtype": "tcp", 00:32:08.794 "traddr": "10.0.0.2", 00:32:08.794 "adrfam": "ipv4", 00:32:08.794 "trsvcid": "4420", 00:32:08.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.794 "hdgst": false, 00:32:08.794 "ddgst": false 00:32:08.794 }, 00:32:08.794 "method": "bdev_nvme_attach_controller" 00:32:08.794 }' 00:32:08.794 [2024-12-09 06:31:03.287521] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:32:08.794 [2024-12-09 06:31:03.287591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531586 ] 00:32:08.794 [2024-12-09 06:31:03.378377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.056 [2024-12-09 06:31:03.430914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.056 Running I/O for 10 seconds... 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:09.628 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']'
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:09.629 [2024-12-09 06:31:04.196386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112df60 is same with the state(6) to be set
00:32:09.629 [the same tcp.c:1790 recv-state message repeats several dozen times, 06:31:04.196422 through 06:31:04.196734; duplicates elided]
00:32:09.629 [2024-12-09 06:31:04.200937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:09.629 [2024-12-09 06:31:04.200972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:09.629 [matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:1, cid:2 and cid:3 elided]
00:32:09.629 [2024-12-09 06:31:04.201026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10906e0 is same with the state(6) to be set
00:32:09.629 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:09.629 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:09.629 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:09.629 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
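Editor's note: the teardown above is the point of the test. host_management.sh first confirms that I/O is actually flowing (read_io_count=835 against a threshold of 100), then revokes the initiator's host NQN while bdevperf still has a queue depth of 64 outstanding, which deletes the queue pairs and aborts every in-flight command (the per-command abort log follows below), and finally re-grants access so the host can reconnect. Reduced to plain commands, the sequence looks roughly like this sketch; rpc.py stands in for the script's rpc_cmd wrapper, and the socket path and NQNs are taken from this run:

    # confirm I/O is flowing on the initiator side
    reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && echo "I/O confirmed: $reads reads"
    # inject the fault: revoke, then restore, the host's subsystem access
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0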
00:32:09.629 [2024-12-09 06:31:04.205826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.629 [2024-12-09 06:31:04.205847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:09.630 [63 further in-flight commands aborted identically between 06:31:04.205861 and 06:31:04.206874: READ cid:32-63 (lba:118784-122752) and WRITE cid:0-30 (lba:122880-126720), each followed by an ABORTED - SQ DELETION (00/08) completion; pairs elided]
00:32:09.631 [2024-12-09 06:31:04.208030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:09.631 task offset: 118656 on job bdev=Nvme0n1 fails
00:32:09.631 Latency(us)
00:32:09.631 [2024-12-09T05:31:04.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:09.631 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:09.631 Job: Nvme0n1 ended in about 0.61 seconds with error
00:32:09.631 Verification LBA range: start 0x0 length 0x400
00:32:09.631 Nvme0n1 : 0.61 1514.48 94.66 104.56 0.00 38662.10 1430.45 37506.76
00:32:09.631 [2024-12-09T05:31:04.218Z] ===================================================================================================================
00:32:09.631 [2024-12-09T05:31:04.218Z] Total : 1514.48 94.66 104.56 0.00 38662.10 1430.45 37506.76
00:32:09.631 [2024-12-09 06:31:04.209873] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:09.631 [2024-12-09 06:31:04.209894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10906e0 (9): Bad file descriptor
00:32:09.893 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:09.893 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:09.893 [2024-12-09 06:31:04.214216] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
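Editor's note: the second bdevperf pass below regenerates the same single-controller configuration through gen_nvmf_target_json and feeds it to bdevperf over a file descriptor (--json /dev/fd/62). Assembled by hand it would look roughly like the sketch below; the outer subsystems/bdev wrapper is a reconstruction of what the elided jq step produces, the attach parameters are verbatim from the printf output in the log, and /tmp/nvme0.json is a hypothetical path:

    # hand-assembled equivalent of the scripted invocation: write the
    # bdev config (params from the log; wrapper reconstructed), then run
    echo '{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }' > /tmp/nvme0.json   # hypothetical path
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1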
00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 531586 00:32:10.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (531586) - No such process 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:10.833 { 00:32:10.833 "params": { 00:32:10.833 "name": "Nvme$subsystem", 00:32:10.833 "trtype": "$TEST_TRANSPORT", 00:32:10.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.833 "adrfam": "ipv4", 00:32:10.833 "trsvcid": "$NVMF_PORT", 00:32:10.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.833 "hdgst": ${hdgst:-false}, 00:32:10.833 "ddgst": ${ddgst:-false} 00:32:10.833 }, 00:32:10.833 "method": "bdev_nvme_attach_controller" 00:32:10.833 } 00:32:10.833 EOF 00:32:10.833 )") 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:10.833 06:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:10.833 "params": { 00:32:10.833 "name": "Nvme0", 00:32:10.833 "trtype": "tcp", 00:32:10.833 "traddr": "10.0.0.2", 00:32:10.833 "adrfam": "ipv4", 00:32:10.833 "trsvcid": "4420", 00:32:10.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:10.833 "hdgst": false, 00:32:10.833 "ddgst": false 00:32:10.833 }, 00:32:10.833 "method": "bdev_nvme_attach_controller" 00:32:10.833 }' 00:32:10.833 [2024-12-09 06:31:05.271575] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
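Editor's note: the one-second verify pass whose startup continues below is the recovery check; a fresh single-core bdevperf instance attaches to the restored subsystem and must complete without errors. Its MiB/s column is simply IOPS scaled by the 64 KiB I/O size, so the figures are easy to sanity-check, e.g. for the 1986.07 IOPS reported below:

    # sanity-check bdevperf's MiB/s column from its IOPS column
    # (figures from the run below; 65536-byte I/Os)
    awk 'BEGIN { printf "%.2f MiB/s\n", 1986.07 * 65536 / 1048576 }'   # 124.13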
00:32:10.833 [2024-12-09 06:31:05.271632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531908 ] 00:32:10.833 [2024-12-09 06:31:05.358563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.833 [2024-12-09 06:31:05.392403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.093 Running I/O for 1 seconds... 00:32:12.030 1982.00 IOPS, 123.88 MiB/s 00:32:12.030 Latency(us) 00:32:12.030 [2024-12-09T05:31:06.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.030 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.030 Verification LBA range: start 0x0 length 0x400 00:32:12.030 Nvme0n1 : 1.03 1986.07 124.13 0.00 0.00 31660.26 5822.62 29844.09 00:32:12.030 [2024-12-09T05:31:06.617Z] =================================================================================================================== 00:32:12.030 [2024-12-09T05:31:06.617Z] Total : 1986.07 124.13 0.00 0.00 31660.26 5822.62 29844.09 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.290 rmmod nvme_tcp 00:32:12.290 rmmod nvme_fabrics 00:32:12.290 rmmod nvme_keyring 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 531388 ']' 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 531388 00:32:12.290 06:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 531388 ']' 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 531388 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531388 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531388' 00:32:12.290 killing process with pid 531388 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 531388 00:32:12.290 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 531388 00:32:12.550 [2024-12-09 06:31:06.955783] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.550 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:15.097 00:32:15.097 real 0m14.601s 00:32:15.097 user 0m19.119s 
00:32:15.097 sys 0m7.423s 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:15.097 ************************************ 00:32:15.097 END TEST nvmf_host_management 00:32:15.097 ************************************ 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:15.097 ************************************ 00:32:15.097 START TEST nvmf_lvol 00:32:15.097 ************************************ 00:32:15.097 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:15.097 * Looking for test storage... 00:32:15.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:15.098 06:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:15.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.098 --rc genhtml_branch_coverage=1 00:32:15.098 --rc genhtml_function_coverage=1 00:32:15.098 --rc genhtml_legend=1 00:32:15.098 --rc geninfo_all_blocks=1 00:32:15.098 --rc geninfo_unexecuted_blocks=1 00:32:15.098 00:32:15.098 ' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:15.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.098 --rc genhtml_branch_coverage=1 00:32:15.098 --rc genhtml_function_coverage=1 00:32:15.098 --rc genhtml_legend=1 00:32:15.098 --rc geninfo_all_blocks=1 00:32:15.098 --rc geninfo_unexecuted_blocks=1 00:32:15.098 00:32:15.098 ' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:15.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.098 --rc genhtml_branch_coverage=1 00:32:15.098 --rc genhtml_function_coverage=1 00:32:15.098 --rc genhtml_legend=1 00:32:15.098 --rc geninfo_all_blocks=1 00:32:15.098 --rc geninfo_unexecuted_blocks=1 00:32:15.098 00:32:15.098 ' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:15.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.098 --rc genhtml_branch_coverage=1 00:32:15.098 --rc genhtml_function_coverage=1 00:32:15.098 --rc 
genhtml_legend=1 00:32:15.098 --rc geninfo_all_blocks=1 00:32:15.098 --rc geninfo_unexecuted_blocks=1 00:32:15.098 00:32:15.098 ' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.098 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.098 06:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.099 06:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.239 06:31:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:23.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:23.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:23.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:23.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.239 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.240 
06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:23.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:23.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms
00:32:23.240
00:32:23.240 --- 10.0.0.2 ping statistics ---
00:32:23.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:23.240 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms
00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:23.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:23.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms
00:32:23.240
00:32:23.240 --- 10.0.0.1 ping statistics ---
00:32:23.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:23.240 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=536120 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 536120 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 536120 ']' 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.240 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:23.240 [2024-12-09 06:31:16.949240] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
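For reference, the nvmfpid/waitforlisten trace above reduces to the launch-and-wait pattern sketched below. This is a minimal reconstruction, not the verbatim autotest_common.sh code: the retry loop and its bounds are assumptions, while rpc.py with its -s/-t flags and the rpc_get_methods method are standard SPDK tooling.

    # Start the target inside the test namespace, exactly as in the trace above.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app answers (sketch of waitforlisten).
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # give up if the target already died
        sleep 0.5
    done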
00:32:23.240 [2024-12-09 06:31:16.950332] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:32:23.240 [2024-12-09 06:31:16.950381] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.240 [2024-12-09 06:31:17.047659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:23.240 [2024-12-09 06:31:17.098209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.240 [2024-12-09 06:31:17.098266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.240 [2024-12-09 06:31:17.098274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.240 [2024-12-09 06:31:17.098281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.240 [2024-12-09 06:31:17.098287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.240 [2024-12-09 06:31:17.100170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.240 [2024-12-09 06:31:17.100292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.240 [2024-12-09 06:31:17.100294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.240 [2024-12-09 06:31:17.176479] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:23.240 [2024-12-09 06:31:17.176537] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:23.240 [2024-12-09 06:31:17.177194] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:23.240 [2024-12-09 06:31:17.177417] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
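The reactor and interrupt-mode notices above result from flags assembled earlier by build_nvmf_app_args; condensed into a sketch (INTERRUPT_MODE here stands in for the harness's real conditional, which appears in the trace as '[' 1 -eq 1 ']'):

    # Condensed from the nvmf/common.sh xtrace earlier in this log.
    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shared-memory id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                      # empty unless hugepages are disabled
    [ "$INTERRUPT_MODE" -eq 1 ] && NVMF_APP+=(--interrupt-mode)
    # Once the TCP test network is up, the namespace wrapper is prepended:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x7 &                        # -m 0x7 => reactors on cores 0, 1, 2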
00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.240 06:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:23.501 [2024-12-09 06:31:17.985305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.501 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:23.762 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:23.762 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:24.022 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:24.022 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:24.283 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:24.283 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ecfc973f-2483-43b6-9080-7941c9440b58 00:32:24.283 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ecfc973f-2483-43b6-9080-7941c9440b58 lvol 20 00:32:24.544 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3c5594dd-5071-4f0e-a9f2-7ccffe0af772 00:32:24.544 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:24.805 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c5594dd-5071-4f0e-a9f2-7ccffe0af772 00:32:25.067 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.067 [2024-12-09 06:31:19.589218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:25.067 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.328 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=536757 00:32:25.328 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:25.328 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:26.271 06:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3c5594dd-5071-4f0e-a9f2-7ccffe0af772 MY_SNAPSHOT 00:32:26.532 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3d470d5b-43c7-4073-94e1-a21ef62593cb 00:32:26.532 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3c5594dd-5071-4f0e-a9f2-7ccffe0af772 30 00:32:26.792 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3d470d5b-43c7-4073-94e1-a21ef62593cb MY_CLONE 00:32:27.052 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=46ccda77-f4ff-4d78-b60a-7cf0ad7dd4d9 00:32:27.052 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 46ccda77-f4ff-4d78-b60a-7cf0ad7dd4d9 00:32:27.623 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 536757
00:32:35.754 Initializing NVMe Controllers
00:32:35.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:32:35.754 Controller IO queue size 128, less than required.
00:32:35.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:35.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:32:35.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:32:35.754 Initialization complete. Launching workers. 
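The lvol workflow that just ran against live perf I/O condenses into the sketch below. Every RPC name and argument appears verbatim in the trace above; the variable captures are illustrative, relying on rpc.py printing the created UUID or bdev name on stdout rather than hard-coding this run's values.

    # Condensed RPC sequence from nvmf_lvol.sh as traced in this log.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                      # creates Malloc0
    $rpc bdev_malloc_create 64 512                      # creates Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol on the store
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf writes to the namespace while the lvol is mutated underneath:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                    # grow 20 MiB -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot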
00:32:35.754 ========================================================
00:32:35.754                                                                             Latency(us)
00:32:35.754 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:32:35.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   16580.60      64.77    7722.57    2362.29   48648.09
00:32:35.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   15379.80      60.08    8326.18    1895.56   50869.47
00:32:35.754 ========================================================
00:32:35.754 Total                                                                     :   31960.40     124.85    8013.04    1895.56   50869.47
00:32:35.754
00:32:35.754 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.015 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c5594dd-5071-4f0e-a9f2-7ccffe0af772 00:32:36.015 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecfc973f-2483-43b6-9080-7941c9440b58 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.276 rmmod nvme_tcp 00:32:36.276 rmmod nvme_fabrics 00:32:36.276 rmmod nvme_keyring 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 536120 ']' 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 536120 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 536120 ']' 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 536120 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.276 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536120 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536120' 00:32:36.537 killing process with pid 536120 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 536120 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 536120 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:36.537 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.538 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.538 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:36.538 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:36.538 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.538 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.538 06:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.538 06:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.538 06:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.538 06:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.538 06:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.082 00:32:39.082 real 0m23.931s 00:32:39.082 user 0m56.063s 00:32:39.082 sys 0m10.738s 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:39.082 ************************************ 00:32:39.082 END TEST nvmf_lvol 00:32:39.082 ************************************ 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.082 ************************************ 00:32:39.082 START TEST nvmf_lvs_grow 00:32:39.082 
************************************ 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:39.082 * Looking for test storage... 00:32:39.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:39.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.082 --rc genhtml_branch_coverage=1 00:32:39.082 --rc genhtml_function_coverage=1 00:32:39.082 --rc genhtml_legend=1 00:32:39.082 --rc geninfo_all_blocks=1 00:32:39.082 --rc geninfo_unexecuted_blocks=1 00:32:39.082 00:32:39.082 ' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:39.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.082 --rc genhtml_branch_coverage=1 00:32:39.082 --rc genhtml_function_coverage=1 00:32:39.082 --rc genhtml_legend=1 00:32:39.082 --rc geninfo_all_blocks=1 00:32:39.082 --rc geninfo_unexecuted_blocks=1 00:32:39.082 00:32:39.082 ' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:39.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.082 --rc genhtml_branch_coverage=1 00:32:39.082 --rc genhtml_function_coverage=1 00:32:39.082 --rc genhtml_legend=1 00:32:39.082 --rc geninfo_all_blocks=1 00:32:39.082 --rc geninfo_unexecuted_blocks=1 00:32:39.082 00:32:39.082 ' 00:32:39.082 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:39.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.083 --rc genhtml_branch_coverage=1 00:32:39.083 --rc genhtml_function_coverage=1 00:32:39.083 --rc genhtml_legend=1 00:32:39.083 --rc geninfo_all_blocks=1 00:32:39.083 --rc geninfo_unexecuted_blocks=1 00:32:39.083 00:32:39.083 ' 00:32:39.083 06:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.083 06:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.225 06:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
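The arrays above are vendor:device allowlists (Intel E810/X722, Mellanox ConnectX) that the loop below walks, printing each matched port. A rough sketch of the same matching, reading the IDs straight from sysfs rather than SPDK's internal pci_bus_cache (so this is an approximation, not the helper's actual lookup):

  # Sketch: find Intel E810 ports (0x8086:0x159b) by reading sysfs directly.
  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")    # e.g. 0x8086
      device=$(<"$dev/device")    # e.g. 0x159b (the E810 ports in this job)
      if [[ $vendor == "$intel" && $device == 0x159b ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
          ls "$dev/net" 2>/dev/null    # netdev bound to the port, e.g. cvl_0_0
      fi
  done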
00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:47.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:47.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.225 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:47.226 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:47.226 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.226 06:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:32:47.226 00:32:47.226 --- 10.0.0.2 ping statistics --- 00:32:47.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.226 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:32:47.226 00:32:47.226 --- 10.0.0.1 ping statistics --- 00:32:47.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.226 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=542497 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 542497 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 542497 ']' 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.226 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.226 [2024-12-09 06:31:41.046207] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
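The waitforlisten step traced above does not sleep a fixed interval; it polls the target's UNIX-domain RPC socket until it answers. A minimal sketch of that loop, assuming the retry budget mirrors the max_retries=100 seen in the trace (rpc_get_methods is a standard SPDK RPC):

  # Sketch of waitforlisten: poll /var/tmp/spdk.sock until nvmf_tgt answers JSON-RPC.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=542497
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited early"; exit 1; }
      if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break    # socket is up and answering
      fi
      sleep 0.5
  done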
00:32:47.226 [2024-12-09 06:31:41.047236] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:32:47.226 [2024-12-09 06:31:41.047279] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.226 [2024-12-09 06:31:41.143245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.226 [2024-12-09 06:31:41.192443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.226 [2024-12-09 06:31:41.192513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.226 [2024-12-09 06:31:41.192522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.226 [2024-12-09 06:31:41.192534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.226 [2024-12-09 06:31:41.192540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.226 [2024-12-09 06:31:41.193263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.226 [2024-12-09 06:31:41.268584] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.226 [2024-12-09 06:31:41.268846] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.488 06:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.750 [2024-12-09 06:31:42.086135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 ************************************ 00:32:47.750 START TEST lvs_grow_clean 00:32:47.750 ************************************ 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.750 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.011 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:48.011 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:48.011 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:32:48.011 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:32:48.011 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:48.272 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:48.272 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:48.272 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 lvol 150 00:32:48.532 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=553d4f53-47c0-403b-a41e-8b56f5949b87 00:32:48.532 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:48.532 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:48.532 [2024-12-09 06:31:43.101823] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:48.532 [2024-12-09 06:31:43.101990] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:48.532 true 00:32:48.793 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:32:48.793 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:48.793 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:48.793 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:49.054 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 553d4f53-47c0-403b-a41e-8b56f5949b87 00:32:49.314 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.314 [2024-12-09 06:31:43.842511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.314 06:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=543095 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 543095 /var/tmp/bdevperf.sock 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 543095 ']' 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.575 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:49.575 [2024-12-09 06:31:44.085422] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:32:49.575 [2024-12-09 06:31:44.085502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543095 ] 00:32:49.575 [2024-12-09 06:31:44.157550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.835 [2024-12-09 06:31:44.207618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.406 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.406 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:50.406 06:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:50.665 Nvme0n1 00:32:50.666 06:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:50.926 [ 00:32:50.926 { 00:32:50.926 "name": "Nvme0n1", 00:32:50.926 "aliases": [ 00:32:50.926 "553d4f53-47c0-403b-a41e-8b56f5949b87" 00:32:50.926 ], 00:32:50.926 "product_name": "NVMe disk", 00:32:50.926 "block_size": 4096, 00:32:50.926 "num_blocks": 38912, 00:32:50.926 "uuid": "553d4f53-47c0-403b-a41e-8b56f5949b87", 00:32:50.926 "numa_id": 0, 00:32:50.926 "assigned_rate_limits": { 00:32:50.926 "rw_ios_per_sec": 0, 00:32:50.926 "rw_mbytes_per_sec": 0, 00:32:50.926 "r_mbytes_per_sec": 0, 00:32:50.926 "w_mbytes_per_sec": 0 00:32:50.926 }, 00:32:50.926 "claimed": false, 00:32:50.926 "zoned": false, 00:32:50.926 "supported_io_types": { 00:32:50.926 "read": true, 00:32:50.926 "write": true, 00:32:50.926 "unmap": true, 00:32:50.926 "flush": true, 00:32:50.926 "reset": true, 00:32:50.926 "nvme_admin": true, 00:32:50.926 "nvme_io": true, 00:32:50.926 "nvme_io_md": false, 00:32:50.926 "write_zeroes": true, 00:32:50.926 "zcopy": false, 00:32:50.926 "get_zone_info": false, 00:32:50.926 "zone_management": false, 00:32:50.926 "zone_append": false, 00:32:50.926 "compare": true, 00:32:50.926 "compare_and_write": true, 00:32:50.926 "abort": true, 00:32:50.926 "seek_hole": false, 00:32:50.926 "seek_data": false, 00:32:50.926 "copy": true, 
00:32:50.926 "nvme_iov_md": false 00:32:50.926 }, 00:32:50.926 "memory_domains": [ 00:32:50.926 { 00:32:50.926 "dma_device_id": "system", 00:32:50.926 "dma_device_type": 1 00:32:50.926 } 00:32:50.926 ], 00:32:50.926 "driver_specific": { 00:32:50.926 "nvme": [ 00:32:50.926 { 00:32:50.926 "trid": { 00:32:50.926 "trtype": "TCP", 00:32:50.926 "adrfam": "IPv4", 00:32:50.926 "traddr": "10.0.0.2", 00:32:50.926 "trsvcid": "4420", 00:32:50.926 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.926 }, 00:32:50.926 "ctrlr_data": { 00:32:50.926 "cntlid": 1, 00:32:50.926 "vendor_id": "0x8086", 00:32:50.926 "model_number": "SPDK bdev Controller", 00:32:50.926 "serial_number": "SPDK0", 00:32:50.926 "firmware_revision": "25.01", 00:32:50.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.926 "oacs": { 00:32:50.926 "security": 0, 00:32:50.926 "format": 0, 00:32:50.926 "firmware": 0, 00:32:50.926 "ns_manage": 0 00:32:50.926 }, 00:32:50.926 "multi_ctrlr": true, 00:32:50.926 "ana_reporting": false 00:32:50.926 }, 00:32:50.926 "vs": { 00:32:50.926 "nvme_version": "1.3" 00:32:50.926 }, 00:32:50.926 "ns_data": { 00:32:50.926 "id": 1, 00:32:50.926 "can_share": true 00:32:50.926 } 00:32:50.926 } 00:32:50.926 ], 00:32:50.926 "mp_policy": "active_passive" 00:32:50.926 } 00:32:50.926 } 00:32:50.926 ] 00:32:50.926 06:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=543170 00:32:50.926 06:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:50.926 06:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:50.926 Running I/O for 10 seconds... 
00:32:52.308 Latency(us) 00:32:52.308 [2024-12-09T05:31:46.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.309 Nvme0n1 : 1.00 18161.00 70.94 0.00 0.00 0.00 0.00 0.00 00:32:52.309 [2024-12-09T05:31:46.896Z] =================================================================================================================== 00:32:52.309 [2024-12-09T05:31:46.896Z] Total : 18161.00 70.94 0.00 0.00 0.00 0.00 0.00 00:32:52.309 00:32:52.877 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:32:53.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.137 Nvme0n1 : 2.00 18859.50 73.67 0.00 0.00 0.00 0.00 0.00 00:32:53.137 [2024-12-09T05:31:47.724Z] =================================================================================================================== 00:32:53.137 [2024-12-09T05:31:47.724Z] Total : 18859.50 73.67 0.00 0.00 0.00 0.00 0.00 00:32:53.137 00:32:53.137 true 00:32:53.137 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:32:53.137 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:53.396 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:53.397 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:53.397 06:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 543170 00:32:53.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.967 Nvme0n1 : 3.00 19092.33 74.58 0.00 0.00 0.00 0.00 0.00 00:32:53.967 [2024-12-09T05:31:48.554Z] =================================================================================================================== 00:32:53.967 [2024-12-09T05:31:48.554Z] Total : 19092.33 74.58 0.00 0.00 0.00 0.00 0.00 00:32:53.967 00:32:55.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.347 Nvme0n1 : 4.00 19289.00 75.35 0.00 0.00 0.00 0.00 0.00 00:32:55.347 [2024-12-09T05:31:49.934Z] =================================================================================================================== 00:32:55.347 [2024-12-09T05:31:49.934Z] Total : 19289.00 75.35 0.00 0.00 0.00 0.00 0.00 00:32:55.347 00:32:55.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.917 Nvme0n1 : 5.00 20398.20 79.68 0.00 0.00 0.00 0.00 0.00 00:32:55.917 [2024-12-09T05:31:50.504Z] =================================================================================================================== 00:32:55.917 [2024-12-09T05:31:50.504Z] Total : 20398.20 79.68 0.00 0.00 0.00 0.00 0.00 00:32:55.917 00:32:57.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.303 Nvme0n1 : 6.00 21147.17 82.61 0.00 0.00 0.00 0.00 0.00 00:32:57.303 [2024-12-09T05:31:51.890Z] 
=================================================================================================================== 00:32:57.303 [2024-12-09T05:31:51.890Z] Total : 21147.17 82.61 0.00 0.00 0.00 0.00 0.00 00:32:57.303 00:32:58.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.246 Nvme0n1 : 7.00 21682.14 84.70 0.00 0.00 0.00 0.00 0.00 00:32:58.246 [2024-12-09T05:31:52.833Z] =================================================================================================================== 00:32:58.246 [2024-12-09T05:31:52.833Z] Total : 21682.14 84.70 0.00 0.00 0.00 0.00 0.00 00:32:58.246 00:32:59.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.190 Nvme0n1 : 8.00 22083.50 86.26 0.00 0.00 0.00 0.00 0.00 00:32:59.190 [2024-12-09T05:31:53.777Z] =================================================================================================================== 00:32:59.190 [2024-12-09T05:31:53.777Z] Total : 22083.50 86.26 0.00 0.00 0.00 0.00 0.00 00:32:59.190 00:33:00.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.131 Nvme0n1 : 9.00 22404.44 87.52 0.00 0.00 0.00 0.00 0.00 00:33:00.131 [2024-12-09T05:31:54.718Z] =================================================================================================================== 00:33:00.131 [2024-12-09T05:31:54.718Z] Total : 22404.44 87.52 0.00 0.00 0.00 0.00 0.00 00:33:00.131 00:33:01.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.080 Nvme0n1 : 10.00 22653.20 88.49 0.00 0.00 0.00 0.00 0.00 00:33:01.080 [2024-12-09T05:31:55.667Z] =================================================================================================================== 00:33:01.080 [2024-12-09T05:31:55.667Z] Total : 22653.20 88.49 0.00 0.00 0.00 0.00 0.00 00:33:01.080 00:33:01.080 00:33:01.080 Latency(us) 00:33:01.080 [2024-12-09T05:31:55.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.080 Nvme0n1 : 10.00 22657.47 88.51 0.00 0.00 5646.63 3276.80 32263.88 00:33:01.080 [2024-12-09T05:31:55.667Z] =================================================================================================================== 00:33:01.080 [2024-12-09T05:31:55.667Z] Total : 22657.47 88.51 0.00 0.00 5646.63 3276.80 32263.88 00:33:01.080 { 00:33:01.080 "results": [ 00:33:01.080 { 00:33:01.080 "job": "Nvme0n1", 00:33:01.080 "core_mask": "0x2", 00:33:01.080 "workload": "randwrite", 00:33:01.080 "status": "finished", 00:33:01.080 "queue_depth": 128, 00:33:01.080 "io_size": 4096, 00:33:01.080 "runtime": 10.003766, 00:33:01.080 "iops": 22657.46719785329, 00:33:01.080 "mibps": 88.50573124161441, 00:33:01.080 "io_failed": 0, 00:33:01.080 "io_timeout": 0, 00:33:01.080 "avg_latency_us": 5646.634520997223, 00:33:01.080 "min_latency_us": 3276.8, 00:33:01.080 "max_latency_us": 32263.876923076925 00:33:01.080 } 00:33:01.080 ], 00:33:01.080 "core_count": 1 00:33:01.080 } 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 543095 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 543095 ']' 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 543095 00:33:01.080 
06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543095 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543095' 00:33:01.080 killing process with pid 543095 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 543095 00:33:01.080 Received shutdown signal, test time was about 10.000000 seconds 00:33:01.080 00:33:01.080 Latency(us) 00:33:01.080 [2024-12-09T05:31:55.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.080 [2024-12-09T05:31:55.667Z] =================================================================================================================== 00:33:01.080 [2024-12-09T05:31:55.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.080 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 543095 00:33:01.340 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.340 06:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.601 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:01.601 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:01.861 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:01.861 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:01.861 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:01.861 [2024-12-09 06:31:56.377908] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:01.862 06:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:01.862 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:02.121 request: 00:33:02.121 { 00:33:02.121 "uuid": "6be33feb-c0d2-45ba-aa83-7f86f8e5a521", 00:33:02.121 "method": "bdev_lvol_get_lvstores", 00:33:02.121 "req_id": 1 00:33:02.121 } 00:33:02.121 Got JSON-RPC error response 00:33:02.121 response: 00:33:02.121 { 00:33:02.121 "code": -19, 00:33:02.121 "message": "No such device" 00:33:02.121 } 00:33:02.121 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:02.121 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.121 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.121 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.121 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:02.381 aio_bdev 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 553d4f53-47c0-403b-a41e-8b56f5949b87 
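After the lookup on the deleted lvstore fails with -19 and aio_bdev is re-created, waitforbdev blocks until examine re-reads the lvol metadata and the bdev reappears. A minimal sketch of that wait, using the two RPCs and the 2000 ms timeout shown in the trace:

  # Sketch of waitforbdev: wait for the lvol to come back after aio_bdev is re-created.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdev=553d4f53-47c0-403b-a41e-8b56f5949b87
  "$rpc_py" bdev_wait_for_examine                # lvol metadata gets re-read from aio_bdev
  "$rpc_py" bdev_get_bdevs -b "$bdev" -t 2000    # block up to 2000 ms for the bdev to exist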
00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=553d4f53-47c0-403b-a41e-8b56f5949b87 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:02.381 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:02.641 06:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 553d4f53-47c0-403b-a41e-8b56f5949b87 -t 2000 00:33:02.641 [ 00:33:02.641 { 00:33:02.641 "name": "553d4f53-47c0-403b-a41e-8b56f5949b87", 00:33:02.641 "aliases": [ 00:33:02.641 "lvs/lvol" 00:33:02.641 ], 00:33:02.641 "product_name": "Logical Volume", 00:33:02.641 "block_size": 4096, 00:33:02.641 "num_blocks": 38912, 00:33:02.641 "uuid": "553d4f53-47c0-403b-a41e-8b56f5949b87", 00:33:02.641 "assigned_rate_limits": { 00:33:02.641 "rw_ios_per_sec": 0, 00:33:02.641 "rw_mbytes_per_sec": 0, 00:33:02.641 "r_mbytes_per_sec": 0, 00:33:02.641 "w_mbytes_per_sec": 0 00:33:02.641 }, 00:33:02.641 "claimed": false, 00:33:02.641 "zoned": false, 00:33:02.642 "supported_io_types": { 00:33:02.642 "read": true, 00:33:02.642 "write": true, 00:33:02.642 "unmap": true, 00:33:02.642 "flush": false, 00:33:02.642 "reset": true, 00:33:02.642 "nvme_admin": false, 00:33:02.642 "nvme_io": false, 00:33:02.642 "nvme_io_md": false, 00:33:02.642 "write_zeroes": true, 00:33:02.642 "zcopy": false, 00:33:02.642 "get_zone_info": false, 00:33:02.642 "zone_management": false, 00:33:02.642 "zone_append": false, 00:33:02.642 "compare": false, 00:33:02.642 "compare_and_write": false, 00:33:02.642 "abort": false, 00:33:02.642 "seek_hole": true, 00:33:02.642 "seek_data": true, 00:33:02.642 "copy": false, 00:33:02.642 "nvme_iov_md": false 00:33:02.642 }, 00:33:02.642 "driver_specific": { 00:33:02.642 "lvol": { 00:33:02.642 "lvol_store_uuid": "6be33feb-c0d2-45ba-aa83-7f86f8e5a521", 00:33:02.642 "base_bdev": "aio_bdev", 00:33:02.642 "thin_provision": false, 00:33:02.642 "num_allocated_clusters": 38, 00:33:02.642 "snapshot": false, 00:33:02.642 "clone": false, 00:33:02.642 "esnap_clone": false 00:33:02.642 } 00:33:02.642 } 00:33:02.642 } 00:33:02.642 ] 00:33:02.642 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:02.642 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:02.642 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:02.902 06:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:02.902 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:02.902 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:03.162 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:03.162 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 553d4f53-47c0-403b-a41e-8b56f5949b87 00:33:03.162 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6be33feb-c0d2-45ba-aa83-7f86f8e5a521 00:33:03.422 06:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:03.682 00:33:03.682 real 0m15.904s 00:33:03.682 user 0m15.583s 00:33:03.682 sys 0m1.452s 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.682 ************************************ 00:33:03.682 END TEST lvs_grow_clean 00:33:03.682 ************************************ 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:03.682 ************************************ 00:33:03.682 START TEST lvs_grow_dirty 00:33:03.682 ************************************ 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:03.682 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:03.942 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:03.942 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:04.202 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 lvol 150 00:33:04.461 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:04.461 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:04.461 06:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:04.721 [2024-12-09 06:31:59.097803] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:04.721 [2024-12-09 06:31:59.097976] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:04.721 true 00:33:04.721 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:04.721 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:04.721 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:04.721 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:04.982 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:05.242 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.242 [2024-12-09 06:31:59.758389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.242 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=545638 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 545638 /var/tmp/bdevperf.sock 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 545638 ']' 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:05.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
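The setup traced above reduces to a fixed RPC sequence: create a 200M backing file, build an lvstore and a 150M lvol on it, grow the file to 400M, rescan the AIO bdev, and export the lvol over NVMe/TCP. A minimal standalone sketch of that sequence, with an illustrative /tmp path and rpc.py location in place of this workspace's (the subcommands and flags are the ones visible in the trace):

    # Sketch of the lvs_grow setup; /tmp path and $rpc location are assumptions.
    rpc=./scripts/rpc.py
    truncate -s 200M /tmp/aio_bdev_file                    # AIO backing file
    $rpc bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096  # 4096-byte blocks
    lvs_uuid=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    lvol_uuid=$($rpc bdev_lvol_create -u "$lvs_uuid" lvol 150)  # 150 MiB lvol
    truncate -s 400M /tmp/aio_bdev_file                    # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                          # pick up the new size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Growing the file alone changes nothing inside SPDK; bdev_aio_rescan is what bumps the bdev's block count (51200 to 102400 here), and the later bdev_lvol_grow_lvstore call extends the lvstore into that new space.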
00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.501 06:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:05.501 [2024-12-09 06:31:59.972006] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:05.501 [2024-12-09 06:31:59.972058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545638 ] 00:33:05.501 [2024-12-09 06:32:00.032902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.501 [2024-12-09 06:32:00.064634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.762 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.762 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:05.762 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:06.023 Nvme0n1 00:33:06.023 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:06.023 [ 00:33:06.023 { 00:33:06.023 "name": "Nvme0n1", 00:33:06.023 "aliases": [ 00:33:06.023 "d530c4ab-c437-4035-859c-8b1d7b3c4e87" 00:33:06.023 ], 00:33:06.023 "product_name": "NVMe disk", 00:33:06.023 "block_size": 4096, 00:33:06.023 "num_blocks": 38912, 00:33:06.023 "uuid": "d530c4ab-c437-4035-859c-8b1d7b3c4e87", 00:33:06.023 "numa_id": 0, 00:33:06.023 "assigned_rate_limits": { 00:33:06.023 "rw_ios_per_sec": 0, 00:33:06.023 "rw_mbytes_per_sec": 0, 00:33:06.023 "r_mbytes_per_sec": 0, 00:33:06.023 "w_mbytes_per_sec": 0 00:33:06.023 }, 00:33:06.023 "claimed": false, 00:33:06.023 "zoned": false, 00:33:06.023 "supported_io_types": { 00:33:06.023 "read": true, 00:33:06.023 "write": true, 00:33:06.023 "unmap": true, 00:33:06.023 "flush": true, 00:33:06.023 "reset": true, 00:33:06.023 "nvme_admin": true, 00:33:06.023 "nvme_io": true, 00:33:06.023 "nvme_io_md": false, 00:33:06.023 "write_zeroes": true, 00:33:06.023 "zcopy": false, 00:33:06.023 "get_zone_info": false, 00:33:06.023 "zone_management": false, 00:33:06.023 "zone_append": false, 00:33:06.023 "compare": true, 00:33:06.023 "compare_and_write": true, 00:33:06.023 "abort": true, 00:33:06.023 "seek_hole": false, 00:33:06.023 "seek_data": false, 00:33:06.023 "copy": true, 00:33:06.023 "nvme_iov_md": false 00:33:06.023 }, 00:33:06.023 "memory_domains": [ 00:33:06.023 { 00:33:06.023 "dma_device_id": "system", 00:33:06.023 "dma_device_type": 1 00:33:06.023 } 00:33:06.023 ], 00:33:06.023 "driver_specific": { 
00:33:06.023 "nvme": [ 00:33:06.023 { 00:33:06.023 "trid": { 00:33:06.023 "trtype": "TCP", 00:33:06.023 "adrfam": "IPv4", 00:33:06.023 "traddr": "10.0.0.2", 00:33:06.023 "trsvcid": "4420", 00:33:06.023 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:06.023 }, 00:33:06.023 "ctrlr_data": { 00:33:06.023 "cntlid": 1, 00:33:06.023 "vendor_id": "0x8086", 00:33:06.023 "model_number": "SPDK bdev Controller", 00:33:06.023 "serial_number": "SPDK0", 00:33:06.023 "firmware_revision": "25.01", 00:33:06.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.023 "oacs": { 00:33:06.023 "security": 0, 00:33:06.023 "format": 0, 00:33:06.023 "firmware": 0, 00:33:06.023 "ns_manage": 0 00:33:06.023 }, 00:33:06.023 "multi_ctrlr": true, 00:33:06.023 "ana_reporting": false 00:33:06.023 }, 00:33:06.023 "vs": { 00:33:06.023 "nvme_version": "1.3" 00:33:06.023 }, 00:33:06.024 "ns_data": { 00:33:06.024 "id": 1, 00:33:06.024 "can_share": true 00:33:06.024 } 00:33:06.024 } 00:33:06.024 ], 00:33:06.024 "mp_policy": "active_passive" 00:33:06.024 } 00:33:06.024 } 00:33:06.024 ] 00:33:06.024 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=545720 00:33:06.024 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:06.024 06:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:06.284 Running I/O for 10 seconds... 00:33:07.225 Latency(us) 00:33:07.225 [2024-12-09T05:32:01.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.225 Nvme0n1 : 1.00 18923.00 73.92 0.00 0.00 0.00 0.00 0.00 00:33:07.225 [2024-12-09T05:32:01.812Z] =================================================================================================================== 00:33:07.225 [2024-12-09T05:32:01.812Z] Total : 18923.00 73.92 0.00 0.00 0.00 0.00 0.00 00:33:07.225 00:33:08.165 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:08.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:08.165 Nvme0n1 : 2.00 19240.50 75.16 0.00 0.00 0.00 0.00 0.00 00:33:08.165 [2024-12-09T05:32:02.752Z] =================================================================================================================== 00:33:08.165 [2024-12-09T05:32:02.752Z] Total : 19240.50 75.16 0.00 0.00 0.00 0.00 0.00 00:33:08.165 00:33:08.165 true 00:33:08.165 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:08.165 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:08.426 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:08.426 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 
-- # (( data_clusters == 99 )) 00:33:08.426 06:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 545720 00:33:09.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:09.364 Nvme0n1 : 3.00 19352.00 75.59 0.00 0.00 0.00 0.00 0.00 00:33:09.364 [2024-12-09T05:32:03.951Z] =================================================================================================================== 00:33:09.364 [2024-12-09T05:32:03.951Z] Total : 19352.00 75.59 0.00 0.00 0.00 0.00 0.00 00:33:09.364 00:33:10.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:10.305 Nvme0n1 : 4.00 19435.25 75.92 0.00 0.00 0.00 0.00 0.00 00:33:10.305 [2024-12-09T05:32:04.892Z] =================================================================================================================== 00:33:10.305 [2024-12-09T05:32:04.892Z] Total : 19435.25 75.92 0.00 0.00 0.00 0.00 0.00 00:33:10.305 00:33:11.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:11.244 Nvme0n1 : 5.00 20450.40 79.88 0.00 0.00 0.00 0.00 0.00 00:33:11.244 [2024-12-09T05:32:05.831Z] =================================================================================================================== 00:33:11.244 [2024-12-09T05:32:05.831Z] Total : 20450.40 79.88 0.00 0.00 0.00 0.00 0.00 00:33:11.244 00:33:12.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.186 Nvme0n1 : 6.00 21190.67 82.78 0.00 0.00 0.00 0.00 0.00 00:33:12.186 [2024-12-09T05:32:06.773Z] =================================================================================================================== 00:33:12.186 [2024-12-09T05:32:06.773Z] Total : 21190.67 82.78 0.00 0.00 0.00 0.00 0.00 00:33:12.186 00:33:13.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:13.177 Nvme0n1 : 7.00 21719.43 84.84 0.00 0.00 0.00 0.00 0.00 00:33:13.177 [2024-12-09T05:32:07.764Z] =================================================================================================================== 00:33:13.177 [2024-12-09T05:32:07.764Z] Total : 21719.43 84.84 0.00 0.00 0.00 0.00 0.00 00:33:13.177 00:33:14.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.114 Nvme0n1 : 8.00 22116.00 86.39 0.00 0.00 0.00 0.00 0.00 00:33:14.114 [2024-12-09T05:32:08.701Z] =================================================================================================================== 00:33:14.114 [2024-12-09T05:32:08.701Z] Total : 22116.00 86.39 0.00 0.00 0.00 0.00 0.00 00:33:14.114 00:33:15.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:15.498 Nvme0n1 : 9.00 22424.44 87.60 0.00 0.00 0.00 0.00 0.00 00:33:15.498 [2024-12-09T05:32:10.085Z] =================================================================================================================== 00:33:15.498 [2024-12-09T05:32:10.085Z] Total : 22424.44 87.60 0.00 0.00 0.00 0.00 0.00 00:33:15.498 00:33:16.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:16.440 Nvme0n1 : 10.00 22671.30 88.56 0.00 0.00 0.00 0.00 0.00 00:33:16.440 [2024-12-09T05:32:11.027Z] =================================================================================================================== 00:33:16.440 [2024-12-09T05:32:11.027Z] Total : 22671.30 88.56 0.00 0.00 0.00 0.00 0.00 00:33:16.440 00:33:16.440 00:33:16.440 Latency(us) 00:33:16.440 
[2024-12-09T05:32:11.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:16.440 Nvme0n1 : 10.00 22673.92 88.57 0.00 0.00 5642.51 3087.75 32062.23 00:33:16.440 [2024-12-09T05:32:11.027Z] =================================================================================================================== 00:33:16.440 [2024-12-09T05:32:11.027Z] Total : 22673.92 88.57 0.00 0.00 5642.51 3087.75 32062.23 00:33:16.440 { 00:33:16.440 "results": [ 00:33:16.440 { 00:33:16.440 "job": "Nvme0n1", 00:33:16.440 "core_mask": "0x2", 00:33:16.440 "workload": "randwrite", 00:33:16.440 "status": "finished", 00:33:16.440 "queue_depth": 128, 00:33:16.440 "io_size": 4096, 00:33:16.440 "runtime": 10.004488, 00:33:16.440 "iops": 22673.923942934412, 00:33:16.440 "mibps": 88.57001540208755, 00:33:16.440 "io_failed": 0, 00:33:16.440 "io_timeout": 0, 00:33:16.440 "avg_latency_us": 5642.512006830946, 00:33:16.440 "min_latency_us": 3087.753846153846, 00:33:16.440 "max_latency_us": 32062.227692307693 00:33:16.440 } 00:33:16.440 ], 00:33:16.440 "core_count": 1 00:33:16.440 } 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 545638 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 545638 ']' 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 545638 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 545638 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 545638' 00:33:16.440 killing process with pid 545638 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 545638 00:33:16.440 Received shutdown signal, test time was about 10.000000 seconds 00:33:16.440 00:33:16.440 Latency(us) 00:33:16.440 [2024-12-09T05:32:11.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.440 [2024-12-09T05:32:11.027Z] =================================================================================================================== 00:33:16.440 [2024-12-09T05:32:11.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 545638 00:33:16.440 06:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:16.440 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:16.701 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:16.701 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 542497 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 542497 00:33:16.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 542497 Killed "${NVMF_APP[@]}" "$@" 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.961 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=547475 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 547475 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 547475 ']' 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
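waitforlisten then blocks until the restarted target answers on its RPC socket. A hedged sketch of that polling pattern (the loop shape is illustrative, not SPDK's exact helper; rpc_get_methods is used here only as a lightweight query to see whether the socket is serving):

    # Poll the RPC socket until the freshly started target responds.
    rpc=./scripts/rpc.py
    for _ in $(seq 1 100); do
        $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done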
00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.962 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 [2024-12-09 06:32:11.430832] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.962 [2024-12-09 06:32:11.431766] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:16.962 [2024-12-09 06:32:11.431808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.962 [2024-12-09 06:32:11.520809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.223 [2024-12-09 06:32:11.551226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.223 [2024-12-09 06:32:11.551260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.223 [2024-12-09 06:32:11.551266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.223 [2024-12-09 06:32:11.551271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.223 [2024-12-09 06:32:11.551275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.223 [2024-12-09 06:32:11.551722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.223 [2024-12-09 06:32:11.602052] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:17.223 [2024-12-09 06:32:11.602238] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
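The dirty variant differs from the clean run precisely here: the previous target was killed with kill -9 while the grown lvstore was still open, so the blobstore is never cleanly unloaded, which is why the bs_recover notices appear when the AIO bdev is re-created below. The replacement target is started in interrupt mode with the flags visible in the trace; as a standalone command (binary path abbreviated, netns name taken from this run):

    # Restart the target in interrupt mode inside the test network namespace.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!        # nvmftestfini kills and waits on this PID later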
00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.223 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:17.484 [2024-12-09 06:32:11.841961] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:17.484 [2024-12-09 06:32:11.842183] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:17.484 [2024-12-09 06:32:11.842274] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:17.484 06:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:17.484 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d530c4ab-c437-4035-859c-8b1d7b3c4e87 -t 2000 00:33:17.746 [ 00:33:17.746 { 00:33:17.746 "name": "d530c4ab-c437-4035-859c-8b1d7b3c4e87", 00:33:17.746 "aliases": [ 00:33:17.746 "lvs/lvol" 00:33:17.746 ], 00:33:17.746 "product_name": "Logical Volume", 00:33:17.746 "block_size": 4096, 00:33:17.746 "num_blocks": 38912, 00:33:17.746 "uuid": "d530c4ab-c437-4035-859c-8b1d7b3c4e87", 00:33:17.746 "assigned_rate_limits": { 00:33:17.746 "rw_ios_per_sec": 0, 00:33:17.746 "rw_mbytes_per_sec": 0, 00:33:17.746 
"r_mbytes_per_sec": 0, 00:33:17.746 "w_mbytes_per_sec": 0 00:33:17.746 }, 00:33:17.746 "claimed": false, 00:33:17.746 "zoned": false, 00:33:17.746 "supported_io_types": { 00:33:17.746 "read": true, 00:33:17.746 "write": true, 00:33:17.746 "unmap": true, 00:33:17.746 "flush": false, 00:33:17.746 "reset": true, 00:33:17.746 "nvme_admin": false, 00:33:17.746 "nvme_io": false, 00:33:17.746 "nvme_io_md": false, 00:33:17.746 "write_zeroes": true, 00:33:17.746 "zcopy": false, 00:33:17.746 "get_zone_info": false, 00:33:17.746 "zone_management": false, 00:33:17.746 "zone_append": false, 00:33:17.746 "compare": false, 00:33:17.746 "compare_and_write": false, 00:33:17.746 "abort": false, 00:33:17.746 "seek_hole": true, 00:33:17.746 "seek_data": true, 00:33:17.746 "copy": false, 00:33:17.746 "nvme_iov_md": false 00:33:17.746 }, 00:33:17.746 "driver_specific": { 00:33:17.746 "lvol": { 00:33:17.746 "lvol_store_uuid": "9464a5a6-ccc6-4898-91a1-55893e6e6063", 00:33:17.746 "base_bdev": "aio_bdev", 00:33:17.746 "thin_provision": false, 00:33:17.746 "num_allocated_clusters": 38, 00:33:17.746 "snapshot": false, 00:33:17.746 "clone": false, 00:33:17.746 "esnap_clone": false 00:33:17.746 } 00:33:17.746 } 00:33:17.746 } 00:33:17.746 ] 00:33:17.746 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:17.746 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:17.746 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:18.006 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:18.006 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:18.006 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:18.006 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:18.006 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:18.267 [2024-12-09 06:32:12.692201] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:18.267 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:18.267 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:18.267 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:18.267 06:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.267 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:18.268 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:18.528 request: 00:33:18.528 { 00:33:18.528 "uuid": "9464a5a6-ccc6-4898-91a1-55893e6e6063", 00:33:18.528 "method": "bdev_lvol_get_lvstores", 00:33:18.528 "req_id": 1 00:33:18.528 } 00:33:18.528 Got JSON-RPC error response 00:33:18.528 response: 00:33:18.528 { 00:33:18.528 "code": -19, 00:33:18.528 "message": "No such device" 00:33:18.528 } 00:33:18.528 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:18.528 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.528 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.528 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.528 06:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:18.528 aio_bdev 00:33:18.528 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:18.528 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:18.529 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:18.529 06:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:18.529 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:18.529 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:18.529 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:18.789 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d530c4ab-c437-4035-859c-8b1d7b3c4e87 -t 2000 00:33:19.051 [ 00:33:19.051 { 00:33:19.051 "name": "d530c4ab-c437-4035-859c-8b1d7b3c4e87", 00:33:19.051 "aliases": [ 00:33:19.051 "lvs/lvol" 00:33:19.051 ], 00:33:19.051 "product_name": "Logical Volume", 00:33:19.051 "block_size": 4096, 00:33:19.051 "num_blocks": 38912, 00:33:19.051 "uuid": "d530c4ab-c437-4035-859c-8b1d7b3c4e87", 00:33:19.051 "assigned_rate_limits": { 00:33:19.051 "rw_ios_per_sec": 0, 00:33:19.051 "rw_mbytes_per_sec": 0, 00:33:19.051 "r_mbytes_per_sec": 0, 00:33:19.051 "w_mbytes_per_sec": 0 00:33:19.051 }, 00:33:19.051 "claimed": false, 00:33:19.051 "zoned": false, 00:33:19.051 "supported_io_types": { 00:33:19.051 "read": true, 00:33:19.051 "write": true, 00:33:19.051 "unmap": true, 00:33:19.051 "flush": false, 00:33:19.051 "reset": true, 00:33:19.051 "nvme_admin": false, 00:33:19.051 "nvme_io": false, 00:33:19.051 "nvme_io_md": false, 00:33:19.051 "write_zeroes": true, 00:33:19.051 "zcopy": false, 00:33:19.051 "get_zone_info": false, 00:33:19.051 "zone_management": false, 00:33:19.051 "zone_append": false, 00:33:19.051 "compare": false, 00:33:19.051 "compare_and_write": false, 00:33:19.051 "abort": false, 00:33:19.051 "seek_hole": true, 00:33:19.051 "seek_data": true, 00:33:19.051 "copy": false, 00:33:19.051 "nvme_iov_md": false 00:33:19.051 }, 00:33:19.051 "driver_specific": { 00:33:19.051 "lvol": { 00:33:19.051 "lvol_store_uuid": "9464a5a6-ccc6-4898-91a1-55893e6e6063", 00:33:19.051 "base_bdev": "aio_bdev", 00:33:19.051 "thin_provision": false, 00:33:19.051 "num_allocated_clusters": 38, 00:33:19.051 "snapshot": false, 00:33:19.051 "clone": false, 00:33:19.051 "esnap_clone": false 00:33:19.051 } 00:33:19.051 } 00:33:19.051 } 00:33:19.051 ] 00:33:19.051 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:19.051 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:19.051 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:19.051 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:19.051 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:19.051 06:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:19.311 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:19.311 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d530c4ab-c437-4035-859c-8b1d7b3c4e87 00:33:19.572 06:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9464a5a6-ccc6-4898-91a1-55893e6e6063 00:33:19.572 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:19.832 00:33:19.832 real 0m16.160s 00:33:19.832 user 0m34.426s 00:33:19.832 sys 0m2.936s 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:19.832 ************************************ 00:33:19.832 END TEST lvs_grow_dirty 00:33:19.832 ************************************ 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:19.832 nvmf_trace.0 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
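Every cluster assertion in this suite, clean and dirty alike, follows one pattern: query the lvstore over RPC and compare a jq-extracted counter. A minimal sketch of that pattern ($rpc path and the lvs_uuid variable are assumptions; the expected values are the ones checked above):

    # The suite's assertion idiom: lvstore counters via bdev_lvol_get_lvstores + jq.
    rpc=./scripts/rpc.py
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( total == 99 ))   # grown 400M file at the 4 MiB cluster size, less metadata
    (( free == 61 ))    # 99 data clusters minus the 38 allocated to the 150M lvol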
00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.832 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.832 rmmod nvme_tcp 00:33:20.091 rmmod nvme_fabrics 00:33:20.091 rmmod nvme_keyring 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 547475 ']' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 547475 ']' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547475' 00:33:20.092 killing process with pid 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 547475 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.092 06:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.638 00:33:22.638 real 0m43.570s 00:33:22.638 user 0m53.088s 00:33:22.638 sys 0m10.527s 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:22.638 ************************************ 00:33:22.638 END TEST nvmf_lvs_grow 00:33:22.638 ************************************ 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.638 ************************************ 00:33:22.638 START TEST nvmf_bdev_io_wait 00:33:22.638 ************************************ 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:22.638 * Looking for test storage... 
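run_test is what produces the START/END banners and the real/user/sys summaries seen throughout this log. A hedged sketch of an equivalent wrapper, not SPDK's actual autotest_common.sh implementation (which also handles xtrace toggling and failure bookkeeping):

    # Hedged sketch only: time a test script between START/END banners.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                      # emits the real/user/sys lines
        echo "END TEST $name"
    }
    run_test_sketch nvmf_bdev_io_wait ./test/nvmf/target/bdev_io_wait.sh \
        --transport=tcp --interrupt-mode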
00:33:22.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.638 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.639 --rc genhtml_branch_coverage=1 00:33:22.639 --rc genhtml_function_coverage=1 00:33:22.639 --rc genhtml_legend=1 00:33:22.639 --rc geninfo_all_blocks=1 00:33:22.639 --rc geninfo_unexecuted_blocks=1 00:33:22.639 00:33:22.639 ' 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.639 --rc genhtml_branch_coverage=1 00:33:22.639 --rc genhtml_function_coverage=1 00:33:22.639 --rc genhtml_legend=1 00:33:22.639 --rc geninfo_all_blocks=1 00:33:22.639 --rc geninfo_unexecuted_blocks=1 00:33:22.639 00:33:22.639 ' 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.639 --rc genhtml_branch_coverage=1 00:33:22.639 --rc genhtml_function_coverage=1 00:33:22.639 --rc genhtml_legend=1 00:33:22.639 --rc geninfo_all_blocks=1 00:33:22.639 --rc geninfo_unexecuted_blocks=1 00:33:22.639 00:33:22.639 ' 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.639 --rc genhtml_branch_coverage=1 00:33:22.639 --rc genhtml_function_coverage=1 00:33:22.639 --rc genhtml_legend=1 00:33:22.639 --rc geninfo_all_blocks=1 00:33:22.639 --rc 
geninfo_unexecuted_blocks=1 00:33:22.639 00:33:22.639 ' 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.639 06:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:22.639 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.640 06:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.779 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
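The gather_supported_nvmf_pci_devs step above fills the e810/x722/mlx arrays from a pci_bus_cache map keyed by vendor:device ID (0x8086 Intel, 0x15b3 Mellanox) and then prefers the e810 list for TCP runs. Stripped of the cache, the bucketing amounts to roughly the sketch below; the lspci parsing is an illustrative approximation, not the script's actual cache-building code.

# Rough sketch of the vendor:device bucketing done above via pci_bus_cache (approximation).
declare -a e810=() x722=() mlx=()
while read -r addr class vendor device; do
    case "$vendor:$device" in
        8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
        8086:37d2)           x722+=("$addr") ;;  # Intel X722
        15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX/BlueField
    esac
done < <(lspci -Dnmm | tr -d '"' | awk '{print $1, $2, $3, $4}')
pci_devs=("${e810[@]}")  # e810 devices found, so that list wins, as in the trace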
00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:30.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.780 
06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.780 06:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:33:30.780 00:33:30.780 --- 10.0.0.2 ping statistics --- 00:33:30.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.780 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:33:30.780 00:33:30.780 --- 10.0.0.1 ping statistics --- 00:33:30.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.780 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=552040 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 552040 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 552040 ']' 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
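The nvmf_tcp_init sequence traced above reduces to: move the target-side port of the NIC pair into a private network namespace, address both ends, open TCP port 4420 in the firewall, and confirm reachability with one ping in each direction. Condensed, using the interface names and addresses from the trace:

# Condensed nvmf_tcp_init (cvl_0_0 = target port, cvl_0_1 = initiator port).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator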
00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.780 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 [2024-12-09 06:32:24.244037] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:30.780 [2024-12-09 06:32:24.245123] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:30.780 [2024-12-09 06:32:24.245174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.780 [2024-12-09 06:32:24.340607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.780 [2024-12-09 06:32:24.393725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.780 [2024-12-09 06:32:24.393779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.780 [2024-12-09 06:32:24.393788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.780 [2024-12-09 06:32:24.393794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.780 [2024-12-09 06:32:24.393800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.780 [2024-12-09 06:32:24.396010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.780 [2024-12-09 06:32:24.396164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.780 [2024-12-09 06:32:24.396314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.780 [2024-12-09 06:32:24.396314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.780 [2024-12-09 06:32:24.396652] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
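waitforlisten blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket. Its essence is a bounded poll against the RPC server; a minimal sketch follows (the retry count and sleep interval are illustrative, not the helper's exact values):

# Poll until the target's RPC server responds (sketch).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1                       # target died early
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                         # timed out
}
waitforlisten "$nvmfpid" || exit 1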
00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 [2024-12-09 06:32:25.175564] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:30.780 [2024-12-09 06:32:25.175661] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:30.780 [2024-12-09 06:32:25.176767] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:30.780 [2024-12-09 06:32:25.176854] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
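Because nvmf_tgt was launched with --wait-for-rpc, it parks before subsystem initialization, which is what lets the test shrink the bdev_io pool (bdev_set_options -p 5 -c 1) before framework_start_init; with only five bdev_ios available, bdevperf submissions hit ENOMEM and exercise the spdk_bdev_queue_io_wait path this test exists to cover. The rpc_cmd calls here and in the next few entries are equivalent to:

# Provisioning sequence issued over the RPC socket (commands and values from the trace).
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1    # tiny bdev_io pool -> forces the IO-wait queueing path
$RPC framework_start_init          # leave the --wait-for-rpc pre-init state
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420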
00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 [2024-12-09 06:32:25.188843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 Malloc0 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 [2024-12-09 06:32:25.257249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=552219 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=552222 00:33:30.780 06:32:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.780 { 00:33:30.780 "params": { 00:33:30.780 "name": "Nvme$subsystem", 00:33:30.780 "trtype": "$TEST_TRANSPORT", 00:33:30.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.780 "adrfam": "ipv4", 00:33:30.780 "trsvcid": "$NVMF_PORT", 00:33:30.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.780 "hdgst": ${hdgst:-false}, 00:33:30.780 "ddgst": ${ddgst:-false} 00:33:30.780 }, 00:33:30.780 "method": "bdev_nvme_attach_controller" 00:33:30.780 } 00:33:30.780 EOF 00:33:30.780 )") 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=552225 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=552228 00:33:30.780 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.780 { 00:33:30.780 "params": { 00:33:30.780 "name": "Nvme$subsystem", 00:33:30.780 "trtype": "$TEST_TRANSPORT", 00:33:30.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.780 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "$NVMF_PORT", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.781 "hdgst": ${hdgst:-false}, 00:33:30.781 "ddgst": ${ddgst:-false} 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 } 00:33:30.781 EOF 00:33:30.781 )") 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.781 { 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme$subsystem", 00:33:30.781 "trtype": "$TEST_TRANSPORT", 00:33:30.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "$NVMF_PORT", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.781 "hdgst": ${hdgst:-false}, 00:33:30.781 "ddgst": ${ddgst:-false} 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 } 00:33:30.781 EOF 00:33:30.781 )") 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.781 { 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme$subsystem", 00:33:30.781 "trtype": "$TEST_TRANSPORT", 00:33:30.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "$NVMF_PORT", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.781 "hdgst": ${hdgst:-false}, 00:33:30.781 "ddgst": ${ddgst:-false} 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 } 00:33:30.781 EOF 00:33:30.781 )") 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 552219 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme1", 00:33:30.781 "trtype": "tcp", 00:33:30.781 "traddr": "10.0.0.2", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "4420", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.781 "hdgst": false, 00:33:30.781 "ddgst": false 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 }' 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme1", 00:33:30.781 "trtype": "tcp", 00:33:30.781 "traddr": "10.0.0.2", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "4420", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.781 "hdgst": false, 00:33:30.781 "ddgst": false 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 }' 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme1", 00:33:30.781 "trtype": "tcp", 00:33:30.781 "traddr": "10.0.0.2", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "4420", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.781 "hdgst": false, 00:33:30.781 "ddgst": false 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 }' 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:30.781 06:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.781 "params": { 00:33:30.781 "name": "Nvme1", 00:33:30.781 "trtype": "tcp", 00:33:30.781 "traddr": "10.0.0.2", 00:33:30.781 "adrfam": "ipv4", 00:33:30.781 "trsvcid": "4420", 00:33:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.781 "hdgst": false, 00:33:30.781 "ddgst": false 00:33:30.781 }, 00:33:30.781 "method": "bdev_nvme_attach_controller" 00:33:30.781 }' 00:33:30.781 [2024-12-09 06:32:25.314266] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:30.781 [2024-12-09 06:32:25.314335] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:30.781 [2024-12-09 06:32:25.316340] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:30.781 [2024-12-09 06:32:25.316341] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
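Each JSON blob printed above comes from gen_nvmf_target_json: a heredoc filled from the test environment, accumulated into a config array, and piped through jq, which each bdevperf instance then reads over --json /dev/fd/63. Below is a skeleton of that pattern, reduced to one controller; the outer "subsystems"/"bdev" wrapper follows the standard SPDK JSON-config shape and is not a verbatim copy of the helper:

# Skeleton of gen_nvmf_target_json: heredoc -> jq -> consumed via --json <(...).
gen_target_json() {
    local subsystem=1
    jq . <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
}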
00:33:30.781 [2024-12-09 06:32:25.316416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:30.781 [2024-12-09 06:32:25.316417] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:30.781 [2024-12-09 06:32:25.318665] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:30.781 [2024-12-09 06:32:25.318719] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:31.041 [2024-12-09 06:32:25.517723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.041 [2024-12-09 06:32:25.560215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:31.041 [2024-12-09 06:32:25.607560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.301 [2024-12-09 06:32:25.645974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:31.301 [2024-12-09 06:32:25.672825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.301 [2024-12-09 06:32:25.706242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:31.301 [2024-12-09 06:32:25.737255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.301 [2024-12-09 06:32:25.770249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:31.301 Running I/O for 1 seconds... 00:33:31.562 Running I/O for 1 seconds... 00:33:31.562 Running I/O for 1 seconds... 00:33:31.562 Running I/O for 1 seconds... 
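The four workloads (write, read, flush, unmap) run as four concurrent bdevperf processes, kept apart by disjoint core masks (-m 0x10/0x20/0x40/0x80) and distinct shared-memory IDs (-i 1..4), which is where the per-process DPDK --file-prefix=spdk1..spdk4 values above come from; the script then waits on each PID in turn. In outline (gen_target_json stands in for the helper sketched earlier; paths shortened):

# Outline of the concurrent bdevperf launches traced above.
BPERF=build/examples/bdevperf
"$BPERF" -m 0x10 -i 1 --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BPERF" -m 0x40 -i 3 --json <(gen_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BPERF" -m 0x80 -i 4 --json <(gen_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"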
00:33:32.503 8962.00 IOPS, 35.01 MiB/s 00:33:32.503 Latency(us) 00:33:32.503 [2024-12-09T05:32:27.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.503 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:32.503 Nvme1n1 : 1.01 8970.04 35.04 0.00 0.00 14127.15 2129.92 25004.50 00:33:32.503 [2024-12-09T05:32:27.090Z] =================================================================================================================== 00:33:32.503 [2024-12-09T05:32:27.090Z] Total : 8970.04 35.04 0.00 0.00 14127.15 2129.92 25004.50 00:33:32.503 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 552222 00:33:32.503 12443.00 IOPS, 48.61 MiB/s [2024-12-09T05:32:27.090Z] 8675.00 IOPS, 33.89 MiB/s 00:33:32.503 Latency(us) 00:33:32.503 [2024-12-09T05:32:27.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.503 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:32.503 Nvme1n1 : 1.01 12488.19 48.78 0.00 0.00 10213.86 5268.09 15627.82 00:33:32.503 [2024-12-09T05:32:27.090Z] =================================================================================================================== 00:33:32.503 [2024-12-09T05:32:27.090Z] Total : 12488.19 48.78 0.00 0.00 10213.86 5268.09 15627.82 00:33:32.503 00:33:32.503 Latency(us) 00:33:32.503 [2024-12-09T05:32:27.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.503 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:32.503 Nvme1n1 : 1.01 8793.79 34.35 0.00 0.00 14518.36 3579.27 32263.88 00:33:32.503 [2024-12-09T05:32:27.090Z] =================================================================================================================== 00:33:32.503 [2024-12-09T05:32:27.090Z] Total : 8793.79 34.35 0.00 0.00 14518.36 3579.27 32263.88 00:33:32.503 195168.00 IOPS, 762.38 MiB/s 00:33:32.503 Latency(us) 00:33:32.503 [2024-12-09T05:32:27.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.503 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:32.503 Nvme1n1 : 1.00 194815.20 761.00 0.00 0.00 653.58 274.12 1814.84 00:33:32.503 [2024-12-09T05:32:27.090Z] =================================================================================================================== 00:33:32.503 [2024-12-09T05:32:27.090Z] Total : 194815.20 761.00 0.00 0.00 653.58 274.12 1814.84 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 552225 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 552228 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:32.765 06:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.765 rmmod nvme_tcp 00:33:32.765 rmmod nvme_fabrics 00:33:32.765 rmmod nvme_keyring 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 552040 ']' 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 552040 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 552040 ']' 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 552040 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552040 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552040' 00:33:32.765 killing process with pid 552040 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 552040 00:33:32.765 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 552040 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:33.027 06:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.027 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.938 00:33:34.938 real 0m12.665s 00:33:34.938 user 0m15.709s 00:33:34.938 sys 0m7.344s 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:34.938 ************************************ 00:33:34.938 END TEST nvmf_bdev_io_wait 00:33:34.938 ************************************ 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.938 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:35.199 ************************************ 00:33:35.199 START TEST nvmf_queue_depth 00:33:35.199 ************************************ 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:35.199 * Looking for test storage... 
00:33:35.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:35.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.199 --rc genhtml_branch_coverage=1 00:33:35.199 --rc genhtml_function_coverage=1 00:33:35.199 --rc genhtml_legend=1 00:33:35.199 --rc geninfo_all_blocks=1 00:33:35.199 --rc geninfo_unexecuted_blocks=1 00:33:35.199 00:33:35.199 ' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:35.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.199 --rc genhtml_branch_coverage=1 00:33:35.199 --rc genhtml_function_coverage=1 00:33:35.199 --rc genhtml_legend=1 00:33:35.199 --rc geninfo_all_blocks=1 00:33:35.199 --rc geninfo_unexecuted_blocks=1 00:33:35.199 00:33:35.199 ' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:35.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.199 --rc genhtml_branch_coverage=1 00:33:35.199 --rc genhtml_function_coverage=1 00:33:35.199 --rc genhtml_legend=1 00:33:35.199 --rc geninfo_all_blocks=1 00:33:35.199 --rc geninfo_unexecuted_blocks=1 00:33:35.199 00:33:35.199 ' 00:33:35.199 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:35.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.200 --rc genhtml_branch_coverage=1 00:33:35.200 --rc genhtml_function_coverage=1 00:33:35.200 --rc genhtml_legend=1 00:33:35.200 --rc geninfo_all_blocks=1 00:33:35.200 --rc 
geninfo_unexecuted_blocks=1 00:33:35.200 00:33:35.200 ' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.200 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.461 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:35.461 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:35.461 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:35.461 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.600 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.601 06:32:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:43.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:43.601 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:43.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:43.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:33:43.601 00:33:43.601 --- 10.0.0.2 ping statistics --- 00:33:43.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.601 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:43.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:33:43.601 00:33:43.601 --- 10.0.0.1 ping statistics --- 00:33:43.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.601 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.601 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.601 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:43.601 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:43.601 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.601 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.601 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=556458 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 556458 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 556458 ']' 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
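Condensed from the nvmf_tcp_init trace above: one ice port (cvl_0_0) is moved into a private network namespace to act as the target, while the other (cvl_0_1) stays in the root namespace as the initiator, and a one-packet ping in each direction verifies the 10.0.0.0/24 link before the target starts. The commands below are the ones traced, minus the iptables comment tag.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator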
00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 [2024-12-09 06:32:37.088960] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:43.602 [2024-12-09 06:32:37.090077] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:33:43.602 [2024-12-09 06:32:37.090127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.602 [2024-12-09 06:32:37.174279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.602 [2024-12-09 06:32:37.223405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.602 [2024-12-09 06:32:37.223466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.602 [2024-12-09 06:32:37.223474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.602 [2024-12-09 06:32:37.223481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.602 [2024-12-09 06:32:37.223487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.602 [2024-12-09 06:32:37.224284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.602 [2024-12-09 06:32:37.299702] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:43.602 [2024-12-09 06:32:37.299965] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
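Once the target is up, the rpc_cmd calls traced below provision it over /var/tmp/spdk.sock. rpc_cmd is the harness wrapper around scripts/rpc.py; written out directly against rpc.py, the sequence is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 64 and 512 come straight from the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE variables set earlier in the trace; -a allows any host NQN to connect, which is what lets the bdevperf initiator attach without an allowed-hosts list.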
00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 [2024-12-09 06:32:37.965059] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.602 06:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 Malloc0 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 [2024-12-09 06:32:38.044989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=556642 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 556642 /var/tmp/bdevperf.sock 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 556642 ']' 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:43.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.602 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:43.602 [2024-12-09 06:32:38.103728] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:33:43.602 [2024-12-09 06:32:38.103796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556642 ] 00:33:43.861 [2024-12-09 06:32:38.195256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.861 [2024-12-09 06:32:38.246404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.431 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.431 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:44.431 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:44.431 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.431 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:44.692 NVMe0n1 00:33:44.692 06:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.692 06:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:44.692 Running I/O for 10 seconds... 00:33:47.019 9216.00 IOPS, 36.00 MiB/s [2024-12-09T05:32:42.547Z] 10604.00 IOPS, 41.42 MiB/s [2024-12-09T05:32:43.489Z] 11106.00 IOPS, 43.38 MiB/s [2024-12-09T05:32:44.431Z] 11521.75 IOPS, 45.01 MiB/s [2024-12-09T05:32:45.371Z] 11868.60 IOPS, 46.36 MiB/s [2024-12-09T05:32:46.310Z] 12098.33 IOPS, 47.26 MiB/s [2024-12-09T05:32:47.248Z] 12190.43 IOPS, 47.62 MiB/s [2024-12-09T05:32:48.628Z] 12322.50 IOPS, 48.13 MiB/s [2024-12-09T05:32:49.567Z] 12437.11 IOPS, 48.58 MiB/s [2024-12-09T05:32:49.567Z] 12510.90 IOPS, 48.87 MiB/s 00:33:54.980 Latency(us) 00:33:54.980 [2024-12-09T05:32:49.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.980 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:54.980 Verification LBA range: start 0x0 length 0x4000 00:33:54.980 NVMe0n1 : 10.05 12550.66 49.03 0.00 0.00 81305.81 8922.98 66140.95 00:33:54.980 [2024-12-09T05:32:49.567Z] =================================================================================================================== 00:33:54.980 [2024-12-09T05:32:49.567Z] Total : 12550.66 49.03 0.00 0.00 81305.81 8922.98 66140.95 00:33:54.980 { 00:33:54.980 "results": [ 00:33:54.980 { 00:33:54.980 "job": "NVMe0n1", 00:33:54.980 "core_mask": "0x1", 00:33:54.980 "workload": "verify", 00:33:54.980 "status": "finished", 00:33:54.980 "verify_range": { 00:33:54.980 "start": 0, 00:33:54.980 "length": 16384 00:33:54.980 }, 00:33:54.980 "queue_depth": 1024, 00:33:54.980 "io_size": 4096, 00:33:54.980 "runtime": 10.048553, 00:33:54.980 "iops": 12550.662767067059, 00:33:54.980 "mibps": 49.0260264338557, 00:33:54.980 "io_failed": 0, 00:33:54.980 "io_timeout": 0, 00:33:54.980 "avg_latency_us": 81305.80516125569, 00:33:54.980 "min_latency_us": 8922.978461538461, 00:33:54.980 "max_latency_us": 66140.94769230769 00:33:54.980 } 
00:33:54.980 ], 00:33:54.980 "core_count": 1 00:33:54.980 } 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 556642 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 556642 ']' 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 556642 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556642 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.980 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556642' 00:33:54.981 killing process with pid 556642 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 556642 00:33:54.981 Received shutdown signal, test time was about 10.000000 seconds 00:33:54.981 00:33:54.981 Latency(us) 00:33:54.981 [2024-12-09T05:32:49.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.981 [2024-12-09T05:32:49.568Z] =================================================================================================================== 00:33:54.981 [2024-12-09T05:32:49.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 556642 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.981 rmmod nvme_tcp 00:33:54.981 rmmod nvme_fabrics 00:33:54.981 rmmod nvme_keyring 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:54.981 
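The two killprocess invocations around this point (pid 556642 for bdevperf, then pid 556458 for the target) follow the same pattern. A sketch reconstructed from the common/autotest_common.sh steps visible in the trace, not a verbatim copy of the helper:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1               # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1              # signal 0: is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
          # the real helper special-cases process_name = sudo; not hit in this run,
          # where the names resolve to reactor_0 and reactor_1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                     # reap it before cleanup continues
  }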
06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 556458 ']' 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 556458 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 556458 ']' 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 556458 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.981 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556458 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556458' 00:33:55.241 killing process with pid 556458 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 556458 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 556458 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.241 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:57.784 00:33:57.784 real 0m22.247s 00:33:57.784 user 0m24.641s 00:33:57.784 sys 0m7.252s 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:57.784 ************************************ 00:33:57.784 END TEST nvmf_queue_depth 00:33:57.784 ************************************ 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:57.784 ************************************ 00:33:57.784 START TEST nvmf_target_multipath 00:33:57.784 ************************************ 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:57.784 * Looking for test storage... 00:33:57.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:57.784 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:57.784 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:57.784 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.784 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.784 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.784 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:57.785 06:32:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.785 --rc genhtml_branch_coverage=1 00:33:57.785 --rc genhtml_function_coverage=1 00:33:57.785 --rc genhtml_legend=1 00:33:57.785 --rc geninfo_all_blocks=1 00:33:57.785 --rc geninfo_unexecuted_blocks=1 00:33:57.785 00:33:57.785 ' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.785 --rc genhtml_branch_coverage=1 00:33:57.785 --rc genhtml_function_coverage=1 00:33:57.785 --rc genhtml_legend=1 00:33:57.785 --rc geninfo_all_blocks=1 00:33:57.785 --rc geninfo_unexecuted_blocks=1 00:33:57.785 00:33:57.785 ' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.785 --rc genhtml_branch_coverage=1 00:33:57.785 --rc genhtml_function_coverage=1 00:33:57.785 --rc genhtml_legend=1 00:33:57.785 --rc geninfo_all_blocks=1 00:33:57.785 --rc 
geninfo_unexecuted_blocks=1 00:33:57.785 00:33:57.785 ' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.785 --rc genhtml_branch_coverage=1 00:33:57.785 --rc genhtml_function_coverage=1 00:33:57.785 --rc genhtml_legend=1 00:33:57.785 --rc geninfo_all_blocks=1 00:33:57.785 --rc geninfo_unexecuted_blocks=1 00:33:57.785 00:33:57.785 ' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
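The cmp_versions trace repeated above (once per test) decides whether the installed lcov predates 2.0 and therefore needs the old --rc option spellings. The logic, reconstructed from the traced scripts/common.sh lines as a sketch handling only the '<' operator: split both versions on dots, dashes, and colons, then compare component-wise as integers, treating missing components as 0.

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local IFS=.-:                     # the separators shown in the trace
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                          # equal is not '<'
  }

  lt 1.15 2 && echo "lcov is pre-2.0: keep the old --rc lcov_* option names"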
00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.785 06:32:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.785 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.786 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
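build_nvmf_app_args, traced just above, assembles the target command line one conditional at a time. A minimal sketch of the pattern; the flags appended are the ones visible in the trace, while the gating variable name is an assumption (the trace only shows the test '[' 1 -eq 1 ']' succeeding):

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run: hugepages stay on
  if [ "$interrupt_mode" -eq 1 ]; then          # assumption: suite-level gate; it
      NVMF_APP+=(--interrupt-mode)              # evaluates true in this run
  fi

The device scan that follows then fills pci_devs with the supported e810/x722/mlx vendor:device ids and maps each PCI address to its interface name via /sys/bus/pci/devices/$pci/net/, producing the "Found net devices under ..." lines below.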
00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.921 06:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.921 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:05.922 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:05.922 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.922 06:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:05.922 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:05.922 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:05.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:34:05.922 00:34:05.922 --- 10.0.0.2 ping statistics --- 00:34:05.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.922 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:05.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:34:05.922 00:34:05.922 --- 10.0.0.1 ping statistics --- 00:34:05.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.922 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:05.922 only one NIC for nvmf test 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.922 rmmod nvme_tcp 00:34:05.922 rmmod nvme_fabrics 00:34:05.922 rmmod nvme_keyring 00:34:05.922 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:05.923 06:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.923 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:07.306 06:33:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.306 00:34:07.306 real 0m9.837s 00:34:07.306 user 0m2.117s 00:34:07.306 sys 0m5.673s 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:07.306 ************************************ 00:34:07.306 END TEST nvmf_target_multipath 00:34:07.306 ************************************ 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.306 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.306 ************************************ 00:34:07.306 START TEST nvmf_zcopy 00:34:07.306 ************************************ 00:34:07.307 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:07.307 * Looking for test storage... 
00:34:07.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.307 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:07.307 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:34:07.307 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.567 --rc genhtml_branch_coverage=1 00:34:07.567 --rc genhtml_function_coverage=1 00:34:07.567 --rc genhtml_legend=1 00:34:07.567 --rc geninfo_all_blocks=1 00:34:07.567 --rc geninfo_unexecuted_blocks=1 00:34:07.567 00:34:07.567 ' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.567 --rc genhtml_branch_coverage=1 00:34:07.567 --rc genhtml_function_coverage=1 00:34:07.567 --rc genhtml_legend=1 00:34:07.567 --rc geninfo_all_blocks=1 00:34:07.567 --rc geninfo_unexecuted_blocks=1 00:34:07.567 00:34:07.567 ' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.567 --rc genhtml_branch_coverage=1 00:34:07.567 --rc genhtml_function_coverage=1 00:34:07.567 --rc genhtml_legend=1 00:34:07.567 --rc geninfo_all_blocks=1 00:34:07.567 --rc geninfo_unexecuted_blocks=1 00:34:07.567 00:34:07.567 ' 00:34:07.567 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:07.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.567 --rc genhtml_branch_coverage=1 00:34:07.567 --rc genhtml_function_coverage=1 00:34:07.567 --rc genhtml_legend=1 00:34:07.568 --rc geninfo_all_blocks=1 00:34:07.568 --rc geninfo_unexecuted_blocks=1 00:34:07.568 00:34:07.568 ' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.568 06:33:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.568 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.703 06:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.703 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:15.704 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:15.704 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:15.704 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:15.704 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.704 06:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:34:15.704 00:34:15.704 --- 10.0.0.2 ping statistics --- 00:34:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.704 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:34:15.704 00:34:15.704 --- 10.0.0.1 ping statistics --- 00:34:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.704 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=566616 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 566616 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 566616 ']' 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.704 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.704 [2024-12-09 06:33:09.474743] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.704 [2024-12-09 06:33:09.475827] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:34:15.704 [2024-12-09 06:33:09.475880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.704 [2024-12-09 06:33:09.554213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.704 [2024-12-09 06:33:09.603345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.704 [2024-12-09 06:33:09.603397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.704 [2024-12-09 06:33:09.603405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.704 [2024-12-09 06:33:09.603412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.704 [2024-12-09 06:33:09.603419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.704 [2024-12-09 06:33:09.604169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.704 [2024-12-09 06:33:09.679532] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:15.704 [2024-12-09 06:33:09.679783] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
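The @250 through @291 sequence above is nvmf/common.sh building its single-host NVMe/TCP topology: cvl_0_0 is moved into a private network namespace to act as the target port, cvl_0_1 stays in the root namespace as the initiator port, and nvmf_tgt is then launched inside that namespace in interrupt mode. A condensed sketch of the same steps; the interface names and addresses are the ones discovered in this run, and the nvmf_tgt path assumes the current directory is an SPDK checkout:

    # Split one NIC pair into target (namespaced) and initiator (root ns) sides.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
    # Launch the target inside the namespace, interrupt mode, core mask 0x2:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2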
00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 [2024-12-09 06:33:10.345009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 [2024-12-09 06:33:10.369139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:15.964 06:33:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 malloc0 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:15.964 { 00:34:15.964 "params": { 00:34:15.964 "name": "Nvme$subsystem", 00:34:15.964 "trtype": "$TEST_TRANSPORT", 00:34:15.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.964 "adrfam": "ipv4", 00:34:15.964 "trsvcid": "$NVMF_PORT", 00:34:15.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.964 "hdgst": ${hdgst:-false}, 00:34:15.964 "ddgst": ${ddgst:-false} 00:34:15.964 }, 00:34:15.964 "method": "bdev_nvme_attach_controller" 00:34:15.964 } 00:34:15.964 EOF 00:34:15.964 )") 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:15.964 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:15.964 "params": { 00:34:15.964 "name": "Nvme1", 00:34:15.964 "trtype": "tcp", 00:34:15.964 "traddr": "10.0.0.2", 00:34:15.964 "adrfam": "ipv4", 00:34:15.964 "trsvcid": "4420", 00:34:15.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.964 "hdgst": false, 00:34:15.964 "ddgst": false 00:34:15.964 }, 00:34:15.964 "method": "bdev_nvme_attach_controller" 00:34:15.964 }' 00:34:15.964 [2024-12-09 06:33:10.467909] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
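The heredoc expanded at @582/@586 above yields only the bdev_nvme_attach_controller entry; gen_nvmf_target_json wraps it in SPDK's standard "subsystems" JSON-config layout before bdevperf reads it from a file descriptor. A sketch of an equivalent standalone invocation; the wrapper shape below is the usual SPDK config layout rather than text shown in this excerpt, and /tmp/zcopy_bdevperf.json is a hypothetical scratch file:

    # Hypothetical standalone equivalent of the 10-second verify run started above.
    cat > /tmp/zcopy_bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192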
00:34:15.964 [2024-12-09 06:33:10.467978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567032 ] 00:34:16.225 [2024-12-09 06:33:10.559485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.225 [2024-12-09 06:33:10.610356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.486 Running I/O for 10 seconds... 00:34:18.369 6929.00 IOPS, 54.13 MiB/s [2024-12-09T05:33:13.896Z] 6995.50 IOPS, 54.65 MiB/s [2024-12-09T05:33:14.837Z] 7570.67 IOPS, 59.15 MiB/s [2024-12-09T05:33:16.217Z] 8043.75 IOPS, 62.84 MiB/s [2024-12-09T05:33:17.157Z] 8328.60 IOPS, 65.07 MiB/s [2024-12-09T05:33:18.097Z] 8513.67 IOPS, 66.51 MiB/s [2024-12-09T05:33:19.035Z] 8649.29 IOPS, 67.57 MiB/s [2024-12-09T05:33:20.017Z] 8749.88 IOPS, 68.36 MiB/s [2024-12-09T05:33:20.958Z] 8825.89 IOPS, 68.95 MiB/s [2024-12-09T05:33:20.958Z] 8885.90 IOPS, 69.42 MiB/s 00:34:26.371 Latency(us) 00:34:26.371 [2024-12-09T05:33:20.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:26.371 Verification LBA range: start 0x0 length 0x1000 00:34:26.371 Nvme1n1 : 10.01 8889.36 69.45 0.00 0.00 14353.60 1701.42 25710.28 00:34:26.371 [2024-12-09T05:33:20.958Z] =================================================================================================================== 00:34:26.371 [2024-12-09T05:33:20.958Z] Total : 8889.36 69.45 0.00 0.00 14353.60 1701.42 25710.28 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=568748 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.371 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.371 { 00:34:26.371 "params": { 00:34:26.371 "name": "Nvme$subsystem", 00:34:26.371 "trtype": "$TEST_TRANSPORT", 00:34:26.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.371 "adrfam": "ipv4", 00:34:26.371 "trsvcid": "$NVMF_PORT", 00:34:26.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.371 "hdgst": ${hdgst:-false}, 00:34:26.371 "ddgst": ${ddgst:-false} 00:34:26.371 }, 00:34:26.371 "method": "bdev_nvme_attach_controller" 00:34:26.371 } 00:34:26.371 EOF 00:34:26.371 )") 00:34:26.371 [2024-12-09 06:33:20.956581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:34:26.371 [2024-12-09 06:33:20.956609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.631 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:26.631 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:26.631 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:26.631 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.631 "params": { 00:34:26.631 "name": "Nvme1", 00:34:26.631 "trtype": "tcp", 00:34:26.631 "traddr": "10.0.0.2", 00:34:26.631 "adrfam": "ipv4", 00:34:26.631 "trsvcid": "4420", 00:34:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.631 "hdgst": false, 00:34:26.631 "ddgst": false 00:34:26.631 }, 00:34:26.631 "method": "bdev_nvme_attach_controller" 00:34:26.631 }' 00:34:26.631 [2024-12-09 06:33:20.968552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.631 [2024-12-09 06:33:20.968563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.631 [2024-12-09 06:33:20.980550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.631 [2024-12-09 06:33:20.980560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.632 [2024-12-09 06:33:20.992549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.632 [2024-12-09 06:33:20.992558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.632 [2024-12-09 06:33:21.000874] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:34:26.632 [2024-12-09 06:33:21.000919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568748 ] 
00:34:26.632 [2024-12-09 06:33:21.083486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:34:26.632 [2024-12-09 06:33:21.113498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 
00:34:26.892 Running I/O for 5 seconds... 
[... subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeating at roughly 12 ms intervals for the remainder of this excerpt ...] 
00:34:27.937 18651.00 IOPS, 145.71 MiB/s [2024-12-09T05:33:22.524Z] 
00:34:28.985 18665.00 IOPS, 145.82 MiB/s [2024-12-09T05:33:23.572Z]
00:34:28.985 [2024-12-09 06:33:23.525754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.985 [2024-12-09 06:33:23.540176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.985 [2024-12-09 06:33:23.540192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.985 [2024-12-09 06:33:23.553358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.985 [2024-12-09 06:33:23.553373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.985 [2024-12-09 06:33:23.567569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.985 [2024-12-09 06:33:23.567584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.580794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.580810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.593956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.593972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.608005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.608021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.621724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.621739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.636022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.636037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.649362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.649377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.663909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.663925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.677208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.677224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.691694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.691711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.705444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.705465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.720399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.720416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.733499] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.733514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.747618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.747634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.761161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.761177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.776115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.776131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.789439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.789459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.803873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.803890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.816925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.816941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.247 [2024-12-09 06:33:23.831490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.247 [2024-12-09 06:33:23.831506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.845142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.845158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.860163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.860179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.873556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.873572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.887592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.887608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.900880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.900896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.915904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.915920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.929094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.556 [2024-12-09 06:33:23.929109] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.556 [2024-12-09 06:33:23.943922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:23.943937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:23.957272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:23.957287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:23.972049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:23.972064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:23.985257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:23.985273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.000080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.000096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.013315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.013330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.027670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.027685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.040989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.041003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.055854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.055870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.069440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.069463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.083895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.083912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.097389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.097405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.112049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.112071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.125336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.125353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.557 [2024-12-09 06:33:24.139718] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.557 [2024-12-09 06:33:24.139735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.153024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.153040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.167822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.167838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.181055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.181070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.196105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.196121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.209355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.209370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.224400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.224416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.237222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.237237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.252334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.252350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.265755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.265770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.280419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.280435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.294065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.294081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.308093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.308109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.321184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.321199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.336158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.336179] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.349390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.349406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.363880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.363896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.376776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.376792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 18681.33 IOPS, 145.95 MiB/s [2024-12-09T05:33:24.412Z] [2024-12-09 06:33:24.389893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.389909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.825 [2024-12-09 06:33:24.404209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.825 [2024-12-09 06:33:24.404225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.092 [2024-12-09 06:33:24.417390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.092 [2024-12-09 06:33:24.417406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.431864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.431880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.445117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.445133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.460070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.460086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.473197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.473213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.487557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.487573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.500857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.500873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.515925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.515940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.528987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.529002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 
06:33:24.544014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.544030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.557192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.557208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.571621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.571637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.584700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.584717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.597652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.597672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.612182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.612198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.625806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.625823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.640133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.640150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.653010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.653026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.093 [2024-12-09 06:33:24.668235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.093 [2024-12-09 06:33:24.668251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.681696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.681712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.696277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.696294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.709436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.709457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.723872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.723888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.737600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.737616] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.751849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.751865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.765248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.765264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.780198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.780215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.793711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.793727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.807863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.807878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.820816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.820831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.835875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.835891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.848922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.848937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.863885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.863901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.877038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.877053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.892324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.892340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.905377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.905392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.919765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.919780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.367 [2024-12-09 06:33:24.933190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.367 [2024-12-09 06:33:24.933205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:24.948297] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:24.948312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:24.961841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:24.961857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:24.975911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:24.975927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:24.989214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:24.989230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.004165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.004181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.017714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.017729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.032116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.032131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.045538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.045555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.059857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.059872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.072869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.072884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.088324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.088340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.101724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.101739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.115911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.115926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.129045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.129061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.144265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.144281] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.157286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.157302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.171281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.171297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.184400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.184416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.197147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.197163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.212364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.212380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.650 [2024-12-09 06:33:25.225955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.650 [2024-12-09 06:33:25.225970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.240576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.240592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.253652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.253667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.268495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.268511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.282046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.282061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.296183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.296200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.309690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.309705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.323958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.323973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.337233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.337248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.351879] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.351895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.365395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.365410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.379853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.379869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 18683.75 IOPS, 145.97 MiB/s [2024-12-09T05:33:25.516Z] [2024-12-09 06:33:25.393064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.393079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.408492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.408509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.421420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.421435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.435576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.435591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.449424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.449439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.463810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.463826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.477418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.477433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.491813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.491829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.929 [2024-12-09 06:33:25.505194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.929 [2024-12-09 06:33:25.505209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.520195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.520211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.533806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.533821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.547475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:31.196 [2024-12-09 06:33:25.547491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.560824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.560840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.575930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.575946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.589222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.589237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.604511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.604527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.617485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.617501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.631655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.631671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.645205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.645224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.659887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.659903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.673372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.673387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.687884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.687899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.701280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.701295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.716086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.716102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.729233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.729248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.743586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.743601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.756919] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.756935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.196 [2024-12-09 06:33:25.772423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.196 [2024-12-09 06:33:25.772439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.466 [2024-12-09 06:33:25.785695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.466 [2024-12-09 06:33:25.785711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.466 [2024-12-09 06:33:25.800125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.466 [2024-12-09 06:33:25.800141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.813023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.813039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.828197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.828213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.841564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.841580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.855483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.855499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.868748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.868764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.881820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.881835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.896039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.896055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.909395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.909415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.923454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.923470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.936723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.936739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.949811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.949826] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.963595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.963611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.976653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.976669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:25.989755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:25.989770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:26.004032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:26.004048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:26.017639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:26.017655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:26.031716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:26.031732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.467 [2024-12-09 06:33:26.045020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.467 [2024-12-09 06:33:26.045035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.059803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.059819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.073027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.073043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.087961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.087977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.101302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.101318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.116208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.116224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.129788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.129803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.143993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.144009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.157175] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.157190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.172385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.172405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.185682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.185698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.199711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.199727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.213134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.213150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.228149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.228165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.241687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.241703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.255869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.255885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.269049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.269064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.283909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.283925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.297085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.297100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.311935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.311952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:31.766 [2024-12-09 06:33:26.325639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:31.766 [2024-12-09 06:33:26.325656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.063 [2024-12-09 06:33:26.339901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.063 [2024-12-09 06:33:26.339918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.063 [2024-12-09 06:33:26.353127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.063 [2024-12-09 06:33:26.353143] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.063 [2024-12-09 06:33:26.367570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.063 [2024-12-09 06:33:26.367586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.063 [2024-12-09 06:33:26.381042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.063 [2024-12-09 06:33:26.381057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.063 18677.60 IOPS, 145.92 MiB/s 00:34:32.063 Latency(us) 00:34:32.063 [2024-12-09T05:33:26.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.063 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:32.063 Nvme1n1 : 5.00 18688.35 146.00 0.00 0.00 6844.15 2318.97 11746.07 00:34:32.063 [2024-12-09T05:33:26.651Z] =================================================================================================================== 00:34:32.064 [2024-12-09T05:33:26.651Z] Total : 18688.35 146.00 0.00 0.00 6844.15 2318.97 11746.07 00:34:32.064 [2024-12-09 06:33:26.392556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.392572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.404557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.404571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.416559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.416574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.428553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.428566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.440553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.440567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.452551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.452564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.464550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.464560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.476553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.476564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 [2024-12-09 06:33:26.488551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.064 [2024-12-09 06:33:26.488560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (568748) - No such process 00:34:32.064 06:33:26 
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 568748
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:32.064 delay0
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:32.064 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:32.343 [2024-12-09 06:33:26.653763] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:39.001 Initializing NVMe Controllers
00:34:39.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:39.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:39.001 Initialization complete. Launching workers.
00:34:39.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1220 00:34:39.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1494, failed to submit 46 00:34:39.001 success 1344, unsuccessful 150, failed 0 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.001 rmmod nvme_tcp 00:34:39.001 rmmod nvme_fabrics 00:34:39.001 rmmod nvme_keyring 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 566616 ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 566616 ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 566616' 00:34:39.001 killing process with pid 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 566616 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.001 06:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.001 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.912 00:34:40.912 real 0m33.625s 00:34:40.912 user 0m42.852s 00:34:40.912 sys 0m12.172s 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:40.912 ************************************ 00:34:40.912 END TEST nvmf_zcopy 00:34:40.912 ************************************ 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:40.912 ************************************ 00:34:40.912 START TEST nvmf_nmic 00:34:40.912 ************************************ 00:34:40.912 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:41.174 * Looking for test storage... 
00:34:41.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.174 --rc genhtml_branch_coverage=1 00:34:41.174 --rc genhtml_function_coverage=1 00:34:41.174 --rc genhtml_legend=1 00:34:41.174 --rc geninfo_all_blocks=1 00:34:41.174 --rc geninfo_unexecuted_blocks=1 00:34:41.174 00:34:41.174 ' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.174 --rc genhtml_branch_coverage=1 00:34:41.174 --rc genhtml_function_coverage=1 00:34:41.174 --rc genhtml_legend=1 00:34:41.174 --rc geninfo_all_blocks=1 00:34:41.174 --rc geninfo_unexecuted_blocks=1 00:34:41.174 00:34:41.174 ' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.174 --rc genhtml_branch_coverage=1 00:34:41.174 --rc genhtml_function_coverage=1 00:34:41.174 --rc genhtml_legend=1 00:34:41.174 --rc geninfo_all_blocks=1 00:34:41.174 --rc geninfo_unexecuted_blocks=1 00:34:41.174 00:34:41.174 ' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.174 --rc genhtml_branch_coverage=1 00:34:41.174 --rc genhtml_function_coverage=1 00:34:41.174 --rc genhtml_legend=1 00:34:41.174 --rc geninfo_all_blocks=1 00:34:41.174 --rc geninfo_unexecuted_blocks=1 00:34:41.174 00:34:41.174 ' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.174 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.175 06:33:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.175 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:49.316 06:33:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:49.316 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.316 06:33:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:49.316 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:49.316 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.316 
06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:49.316 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.316 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.316 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.316 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:49.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:49.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms
00:34:49.317
00:34:49.317 --- 10.0.0.2 ping statistics ---
00:34:49.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:49.317 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:49.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:49.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms
00:34:49.317
00:34:49.317 --- 10.0.0.1 ping statistics ---
00:34:49.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:49.317 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=574732
00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 574732 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 574732 ']' 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.317 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.317 [2024-12-09 06:33:43.270675] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:49.317 [2024-12-09 06:33:43.271755] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:34:49.317 [2024-12-09 06:33:43.271805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.317 [2024-12-09 06:33:43.368868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:49.317 [2024-12-09 06:33:43.422232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.317 [2024-12-09 06:33:43.422288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:49.317 [2024-12-09 06:33:43.422298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.317 [2024-12-09 06:33:43.422305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.317 [2024-12-09 06:33:43.422314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.317 [2024-12-09 06:33:43.424420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.317 [2024-12-09 06:33:43.424556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.317 [2024-12-09 06:33:43.424869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:49.317 [2024-12-09 06:33:43.424873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.317 [2024-12-09 06:33:43.501956] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:49.317 [2024-12-09 06:33:43.502508] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:49.317 [2024-12-09 06:33:43.503358] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:34:49.317 [2024-12-09 06:33:43.503464] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:49.317 [2024-12-09 06:33:43.503599] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.578 [2024-12-09 06:33:44.133908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.578 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.839 Malloc0 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:49.839 [2024-12-09 06:33:44.218156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:34:49.839 test case1: single bdev can't be used in multiple subsystems
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.839 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:49.840 [2024-12-09 06:33:44.253488] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:34:49.840 [2024-12-09 06:33:44.253516] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:34:49.840 [2024-12-09 06:33:44.253525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:49.840 request:
00:34:49.840 {
00:34:49.840 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:34:49.840 "namespace": {
00:34:49.840 "bdev_name": "Malloc0",
00:34:49.840 "no_auto_visible": false,
00:34:49.840 "hide_metadata": false
00:34:49.840 },
00:34:49.840 "method": "nvmf_subsystem_add_ns",
00:34:49.840 "req_id": 1
00:34:49.840 }
00:34:49.840 Got JSON-RPC error response
00:34:49.840 response:
00:34:49.840 {
00:34:49.840 "code": -32602,
00:34:49.840 "message": "Invalid parameters"
00:34:49.840 }
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:34:49.840 06:33:44
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:49.840 Adding namespace failed - expected result. 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:49.840 test case2: host connect to nvmf target in multiple paths 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.840 [2024-12-09 06:33:44.265627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.840 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:50.101 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:50.673 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:50.673 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:50.673 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:50.673 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:50.673 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:52.580 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:52.580 [global] 00:34:52.580 thread=1 00:34:52.580 invalidate=1 
00:34:52.580 rw=write
00:34:52.580 time_based=1
00:34:52.580 runtime=1
00:34:52.580 ioengine=libaio
00:34:52.580 direct=1
00:34:52.580 bs=4096
00:34:52.580 iodepth=1
00:34:52.580 norandommap=0
00:34:52.580 numjobs=1
00:34:52.580
00:34:52.580 verify_dump=1
00:34:52.580 verify_backlog=512
00:34:52.580 verify_state_save=0
00:34:52.580 do_verify=1
00:34:52.580 verify=crc32c-intel
00:34:52.580 [job0]
00:34:52.581 filename=/dev/nvme0n1
00:34:52.840 Could not set queue depth (nvme0n1)
00:34:53.100 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:53.100 fio-3.35
00:34:53.100 Starting 1 thread
00:34:54.483
00:34:54.483 job0: (groupid=0, jobs=1): err= 0: pid=575650: Mon Dec 9 06:33:48 2024
00:34:54.483 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec)
00:34:54.483 slat (nsec): min=9507, max=28655, avg=25756.47, stdev=3979.44
00:34:54.483 clat (usec): min=40983, max=42967, avg=41964.74, stdev=445.26
00:34:54.483 lat (usec): min=41011, max=42994, avg=41990.49, stdev=444.80
00:34:54.483 clat percentiles (usec):
00:34:54.483 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681],
00:34:54.483 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:34:54.483 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730],
00:34:54.483 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:34:54.483 | 99.99th=[42730]
00:34:54.483 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets
00:34:54.483 slat (nsec): min=8992, max=65126, avg=27201.71, stdev=11580.79
00:34:54.483 clat (usec): min=198, max=716, avg=433.19, stdev=82.16
00:34:54.483 lat (usec): min=209, max=781, avg=460.39, stdev=88.88
00:34:54.483 clat percentiles (usec):
00:34:54.483 | 1.00th=[ 239], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 363],
00:34:54.483 | 30.00th=[ 396], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 453],
00:34:54.483 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 570],
00:34:54.483 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 717], 99.95th=[ 717],
00:34:54.483 | 99.99th=[ 717]
00:34:54.483 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:34:54.483 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:54.483 lat (usec) : 250=1.69%, 500=78.53%, 750=16.20%
00:34:54.483 lat (msec) : 50=3.58%
00:34:54.483 cpu : usr=0.87%, sys=1.74%, ctx=531, majf=0, minf=1
00:34:54.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:54.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:54.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:54.483 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:54.483 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:54.483
00:34:54.483 Run status group 0 (all jobs):
00:34:54.483 READ: bw=73.3KiB/s (75.0kB/s), 73.3KiB/s-73.3KiB/s (75.0kB/s-75.0kB/s), io=76.0KiB (77.8kB), run=1037-1037msec
00:34:54.483 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec
00:34:54.483
00:34:54.483 Disk stats (read/write):
00:34:54.483 nvme0n1: ios=65/512, merge=0/0, ticks=686/197, in_queue=883, util=92.89%
00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:54.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:34:54.483 06:33:48
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:54.483 rmmod nvme_tcp 00:34:54.483 rmmod nvme_fabrics 00:34:54.483 rmmod nvme_keyring 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 574732 ']' 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 574732 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 574732 ']' 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 574732 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574732 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 574732' 00:34:54.483 killing process with pid 574732 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 574732 00:34:54.483 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 574732 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:54.745 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.652 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:56.652 00:34:56.652 real 0m15.729s 00:34:56.652 user 0m28.511s 00:34:56.652 sys 0m7.434s 00:34:56.652 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.652 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.652 ************************************ 00:34:56.652 END TEST nvmf_nmic 00:34:56.652 ************************************ 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:56.914 ************************************ 00:34:56.914 START TEST nvmf_fio_target 00:34:56.914 ************************************ 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:56.914 * Looking for test storage... 
00:34:56.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:56.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.914 --rc genhtml_branch_coverage=1 00:34:56.914 --rc genhtml_function_coverage=1 00:34:56.914 --rc genhtml_legend=1 00:34:56.914 --rc geninfo_all_blocks=1 00:34:56.914 --rc geninfo_unexecuted_blocks=1 00:34:56.914 00:34:56.914 ' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:56.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.914 --rc genhtml_branch_coverage=1 00:34:56.914 --rc genhtml_function_coverage=1 00:34:56.914 --rc genhtml_legend=1 00:34:56.914 --rc geninfo_all_blocks=1 00:34:56.914 --rc geninfo_unexecuted_blocks=1 00:34:56.914 00:34:56.914 ' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:56.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.914 --rc genhtml_branch_coverage=1 00:34:56.914 --rc genhtml_function_coverage=1 00:34:56.914 --rc genhtml_legend=1 00:34:56.914 --rc geninfo_all_blocks=1 00:34:56.914 --rc geninfo_unexecuted_blocks=1 00:34:56.914 00:34:56.914 ' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:56.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.914 --rc genhtml_branch_coverage=1 00:34:56.914 --rc genhtml_function_coverage=1 00:34:56.914 --rc genhtml_legend=1 00:34:56.914 --rc geninfo_all_blocks=1 00:34:56.914 --rc geninfo_unexecuted_blocks=1 00:34:56.914 
00:34:56.914 ' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.914 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.915 06:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:05.050 06:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:05.050 06:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:05.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:05.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:05.050 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:05.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:05.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:35:05.050 00:35:05.050 --- 10.0.0.2 ping statistics --- 00:35:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.050 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:35:05.050 00:35:05.050 --- 10.0.0.1 ping statistics --- 00:35:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.050 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:05.050 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=579856 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 579856 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 579856 ']' 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.051 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.051 [2024-12-09 06:33:58.907773] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:05.051 [2024-12-09 06:33:58.908866] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:35:05.051 [2024-12-09 06:33:58.908917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.051 [2024-12-09 06:33:59.004059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.051 [2024-12-09 06:33:59.055589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.051 [2024-12-09 06:33:59.055642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.051 [2024-12-09 06:33:59.055650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.051 [2024-12-09 06:33:59.055657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.051 [2024-12-09 06:33:59.055664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.051 [2024-12-09 06:33:59.057568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.051 [2024-12-09 06:33:59.057709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.051 [2024-12-09 06:33:59.057859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.051 [2024-12-09 06:33:59.057860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.051 [2024-12-09 06:33:59.134188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:05.051 [2024-12-09 06:33:59.134801] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:05.051 [2024-12-09 06:33:59.135360] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:05.051 [2024-12-09 06:33:59.135614] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:05.051 [2024-12-09 06:33:59.135622] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.312 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:05.573 [2024-12-09 06:33:59.946763] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.573 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:05.834 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:05.834 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.094 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:06.094 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.094 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:06.094 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.354 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:06.354 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:06.616 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.616 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:06.877 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.877 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:06.877 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:07.136 06:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:07.136 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:07.395 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:07.395 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:07.395 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:07.653 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:07.654 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:07.914 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.174 [2024-12-09 06:34:02.526682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.174 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:08.174 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:08.434 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:09.003 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:10.916 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:10.916 [global] 00:35:10.916 thread=1 00:35:10.916 invalidate=1 00:35:10.916 rw=write 00:35:10.916 time_based=1 00:35:10.916 runtime=1 00:35:10.916 ioengine=libaio 00:35:10.916 direct=1 00:35:10.916 bs=4096 00:35:10.916 iodepth=1 00:35:10.916 norandommap=0 00:35:10.916 numjobs=1 00:35:10.916 00:35:10.916 verify_dump=1 00:35:10.916 verify_backlog=512 00:35:10.916 verify_state_save=0 00:35:10.916 do_verify=1 00:35:10.916 verify=crc32c-intel 00:35:10.916 [job0] 00:35:10.916 filename=/dev/nvme0n1 00:35:10.916 [job1] 00:35:10.916 filename=/dev/nvme0n2 00:35:10.916 [job2] 00:35:10.916 filename=/dev/nvme0n3 00:35:10.916 [job3] 00:35:10.916 filename=/dev/nvme0n4 00:35:10.916 Could not set queue depth (nvme0n1) 00:35:10.916 Could not set queue depth (nvme0n2) 00:35:10.916 Could not set queue depth (nvme0n3) 00:35:10.916 Could not set queue depth (nvme0n4) 00:35:11.176 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:11.176 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:11.176 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:11.176 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:11.176 fio-3.35 00:35:11.176 Starting 4 threads 00:35:12.559 00:35:12.559 job0: (groupid=0, jobs=1): err= 0: pid=581160: Mon Dec 9 06:34:06 2024 00:35:12.559 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:12.559 slat (nsec): min=7280, max=56906, avg=25089.55, stdev=3345.87 00:35:12.559 clat (usec): min=552, max=1221, avg=883.97, stdev=110.33 00:35:12.559 lat (usec): min=578, max=1246, avg=909.06, stdev=110.19 00:35:12.559 clat percentiles (usec): 00:35:12.559 | 1.00th=[ 644], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 783], 00:35:12.559 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[ 889], 60.00th=[ 922], 00:35:12.559 | 70.00th=[ 947], 80.00th=[ 979], 90.00th=[ 1020], 95.00th=[ 1045], 00:35:12.559 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:35:12.559 | 99.99th=[ 1221] 00:35:12.559 write: IOPS=933, BW=3732KiB/s (3822kB/s)(3736KiB/1001msec); 0 zone resets 00:35:12.559 slat (nsec): min=9411, max=59068, avg=29697.71, stdev=8910.98 00:35:12.559 clat (usec): min=213, max=954, avg=531.38, stdev=104.36 00:35:12.559 lat (usec): min=225, max=986, avg=561.08, stdev=107.37 00:35:12.559 clat percentiles (usec): 00:35:12.559 | 1.00th=[ 281], 5.00th=[ 351], 10.00th=[ 392], 20.00th=[ 449], 00:35:12.559 | 30.00th=[ 482], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 562], 00:35:12.559 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 693], 00:35:12.559 | 99.00th=[ 775], 
99.50th=[ 848], 99.90th=[ 955], 99.95th=[ 955], 00:35:12.559 | 99.99th=[ 955] 00:35:12.559 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.559 lat (usec) : 250=0.28%, 500=24.55%, 750=43.43%, 1000=26.56% 00:35:12.559 lat (msec) : 2=5.19% 00:35:12.559 cpu : usr=2.50%, sys=3.70%, ctx=1446, majf=0, minf=1 00:35:12.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.559 issued rwts: total=512,934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.559 job1: (groupid=0, jobs=1): err= 0: pid=581161: Mon Dec 9 06:34:06 2024 00:35:12.559 read: IOPS=19, BW=79.8KiB/s (81.8kB/s)(80.0KiB/1002msec) 00:35:12.559 slat (nsec): min=9683, max=26682, avg=25361.55, stdev=3698.66 00:35:12.559 clat (usec): min=40836, max=41083, avg=40963.81, stdev=59.78 00:35:12.559 lat (usec): min=40862, max=41109, avg=40989.17, stdev=59.56 00:35:12.559 clat percentiles (usec): 00:35:12.559 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:12.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:12.559 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:12.559 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:12.559 | 99.99th=[41157] 00:35:12.559 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:35:12.559 slat (nsec): min=9514, max=53029, avg=28752.71, stdev=10685.68 00:35:12.559 clat (usec): min=105, max=774, avg=319.70, stdev=151.67 00:35:12.559 lat (usec): min=115, max=821, avg=348.45, stdev=156.98 00:35:12.559 clat percentiles (usec): 00:35:12.559 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 176], 00:35:12.559 | 30.00th=[ 235], 40.00th=[ 255], 50.00th=[ 281], 60.00th=[ 351], 00:35:12.559 | 70.00th=[ 388], 80.00th=[ 449], 90.00th=[ 562], 95.00th=[ 603], 00:35:12.559 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 775], 99.95th=[ 775], 00:35:12.559 | 99.99th=[ 775] 00:35:12.559 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.559 lat (usec) : 250=36.28%, 500=44.92%, 750=14.66%, 1000=0.38% 00:35:12.559 lat (msec) : 50=3.76% 00:35:12.559 cpu : usr=0.90%, sys=1.30%, ctx=536, majf=0, minf=2 00:35:12.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.559 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.559 job2: (groupid=0, jobs=1): err= 0: pid=581162: Mon Dec 9 06:34:06 2024 00:35:12.560 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:35:12.560 slat (nsec): min=24791, max=27742, avg=25843.27, stdev=806.18 00:35:12.560 clat (usec): min=880, max=42307, avg=34395.79, stdev=16138.35 00:35:12.560 lat (usec): min=905, max=42333, avg=34421.64, stdev=16138.16 00:35:12.560 clat percentiles (usec): 00:35:12.560 | 1.00th=[ 881], 5.00th=[ 922], 10.00th=[ 947], 20.00th=[41157], 00:35:12.560 | 
30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:12.560 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:12.560 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:12.560 | 99.99th=[42206] 00:35:12.560 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:35:12.560 slat (nsec): min=4636, max=51240, avg=30478.62, stdev=9219.22 00:35:12.560 clat (usec): min=149, max=790, avg=495.42, stdev=131.03 00:35:12.560 lat (usec): min=183, max=840, avg=525.90, stdev=135.04 00:35:12.560 clat percentiles (usec): 00:35:12.560 | 1.00th=[ 186], 5.00th=[ 277], 10.00th=[ 306], 20.00th=[ 375], 00:35:12.560 | 30.00th=[ 424], 40.00th=[ 469], 50.00th=[ 506], 60.00th=[ 537], 00:35:12.560 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 701], 00:35:12.560 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 791], 99.95th=[ 791], 00:35:12.560 | 99.99th=[ 791] 00:35:12.560 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.560 lat (usec) : 250=2.43%, 500=44.01%, 750=48.31%, 1000=1.69% 00:35:12.560 lat (msec) : 2=0.19%, 50=3.37% 00:35:12.560 cpu : usr=0.78%, sys=1.46%, ctx=534, majf=0, minf=1 00:35:12.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.560 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.560 job3: (groupid=0, jobs=1): err= 0: pid=581163: Mon Dec 9 06:34:06 2024 00:35:12.560 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:35:12.560 slat (nsec): min=10244, max=25741, avg=24637.47, stdev=3489.36 00:35:12.560 clat (usec): min=40385, max=41490, avg=40964.08, stdev=191.19 00:35:12.560 lat (usec): min=40411, max=41501, avg=40988.72, stdev=188.89 00:35:12.560 clat percentiles (usec): 00:35:12.560 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:12.560 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:12.560 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:12.560 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:12.560 | 99.99th=[41681] 00:35:12.560 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:35:12.560 slat (nsec): min=9538, max=53786, avg=29947.86, stdev=8815.06 00:35:12.560 clat (usec): min=138, max=866, avg=415.98, stdev=122.74 00:35:12.560 lat (usec): min=148, max=877, avg=445.93, stdev=124.04 00:35:12.560 clat percentiles (usec): 00:35:12.560 | 1.00th=[ 157], 5.00th=[ 225], 10.00th=[ 273], 20.00th=[ 314], 00:35:12.560 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 416], 60.00th=[ 445], 00:35:12.560 | 70.00th=[ 478], 80.00th=[ 523], 90.00th=[ 578], 95.00th=[ 627], 00:35:12.560 | 99.00th=[ 725], 99.50th=[ 766], 99.90th=[ 865], 99.95th=[ 865], 00:35:12.560 | 99.99th=[ 865] 00:35:12.560 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.560 lat (usec) : 250=7.72%, 500=65.73%, 750=22.41%, 1000=0.56% 00:35:12.560 lat (msec) : 50=3.58% 00:35:12.560 cpu : usr=0.89%, sys=1.29%, ctx=531, majf=0, minf=1 00:35:12.560 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.560 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.560 00:35:12.560 Run status group 0 (all jobs): 00:35:12.560 READ: bw=2225KiB/s (2279kB/s), 75.2KiB/s-2046KiB/s (77.0kB/s-2095kB/s), io=2292KiB (2347kB), run=1001-1030msec 00:35:12.560 WRITE: bw=9592KiB/s (9822kB/s), 1988KiB/s-3732KiB/s (2036kB/s-3822kB/s), io=9880KiB (10.1MB), run=1001-1030msec 00:35:12.560 00:35:12.560 Disk stats (read/write): 00:35:12.560 nvme0n1: ios=562/656, merge=0/0, ticks=483/334, in_queue=817, util=87.47% 00:35:12.560 nvme0n2: ios=66/512, merge=0/0, ticks=1564/152, in_queue=1716, util=98.17% 00:35:12.560 nvme0n3: ios=17/512, merge=0/0, ticks=547/229, in_queue=776, util=88.77% 00:35:12.560 nvme0n4: ios=14/512, merge=0/0, ticks=575/198, in_queue=773, util=89.62% 00:35:12.560 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:12.560 [global] 00:35:12.560 thread=1 00:35:12.560 invalidate=1 00:35:12.560 rw=randwrite 00:35:12.560 time_based=1 00:35:12.560 runtime=1 00:35:12.560 ioengine=libaio 00:35:12.560 direct=1 00:35:12.560 bs=4096 00:35:12.560 iodepth=1 00:35:12.560 norandommap=0 00:35:12.560 numjobs=1 00:35:12.560 00:35:12.560 verify_dump=1 00:35:12.560 verify_backlog=512 00:35:12.560 verify_state_save=0 00:35:12.560 do_verify=1 00:35:12.560 verify=crc32c-intel 00:35:12.560 [job0] 00:35:12.560 filename=/dev/nvme0n1 00:35:12.560 [job1] 00:35:12.560 filename=/dev/nvme0n2 00:35:12.560 [job2] 00:35:12.560 filename=/dev/nvme0n3 00:35:12.560 [job3] 00:35:12.560 filename=/dev/nvme0n4 00:35:12.560 Could not set queue depth (nvme0n1) 00:35:12.560 Could not set queue depth (nvme0n2) 00:35:12.560 Could not set queue depth (nvme0n3) 00:35:12.560 Could not set queue depth (nvme0n4) 00:35:12.820 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.820 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.820 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.820 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.820 fio-3.35 00:35:12.820 Starting 4 threads 00:35:14.205 00:35:14.205 job0: (groupid=0, jobs=1): err= 0: pid=581633: Mon Dec 9 06:34:08 2024 00:35:14.205 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1028msec) 00:35:14.205 slat (nsec): min=24743, max=25565, avg=25181.41, stdev=214.64 00:35:14.205 clat (usec): min=1091, max=42057, avg=39480.99, stdev=9895.48 00:35:14.205 lat (usec): min=1116, max=42082, avg=39506.18, stdev=9895.52 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41681], 00:35:14.205 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:14.205 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:14.205 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:14.205 | 99.99th=[42206] 00:35:14.205 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 
zone resets 00:35:14.205 slat (nsec): min=9430, max=62410, avg=30130.50, stdev=7443.50 00:35:14.205 clat (usec): min=263, max=972, avg=658.24, stdev=130.68 00:35:14.205 lat (usec): min=273, max=1004, avg=688.37, stdev=132.72 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 359], 5.00th=[ 433], 10.00th=[ 478], 20.00th=[ 545], 00:35:14.205 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:35:14.205 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 865], 00:35:14.205 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 971], 99.95th=[ 971], 00:35:14.205 | 99.99th=[ 971] 00:35:14.205 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:35:14.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:14.205 lat (usec) : 500=12.29%, 750=58.41%, 1000=26.09% 00:35:14.205 lat (msec) : 2=0.19%, 50=3.02% 00:35:14.205 cpu : usr=0.78%, sys=1.56%, ctx=529, majf=0, minf=2 00:35:14.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:14.205 job1: (groupid=0, jobs=1): err= 0: pid=581636: Mon Dec 9 06:34:08 2024 00:35:14.205 read: IOPS=18, BW=73.9KiB/s (75.7kB/s)(76.0KiB/1028msec) 00:35:14.205 slat (nsec): min=9423, max=24911, avg=23153.16, stdev=4591.36 00:35:14.205 clat (usec): min=671, max=42724, avg=39822.42, stdev=9482.79 00:35:14.205 lat (usec): min=682, max=42733, avg=39845.58, stdev=9485.70 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 668], 5.00th=[ 668], 10.00th=[41681], 20.00th=[41681], 00:35:14.205 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:14.205 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:35:14.205 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:14.205 | 99.99th=[42730] 00:35:14.205 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:35:14.205 slat (nsec): min=9318, max=62030, avg=29191.74, stdev=7931.29 00:35:14.205 clat (usec): min=137, max=740, avg=491.60, stdev=111.01 00:35:14.205 lat (usec): min=168, max=771, avg=520.79, stdev=112.96 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 247], 5.00th=[ 281], 10.00th=[ 359], 20.00th=[ 388], 00:35:14.205 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[ 494], 60.00th=[ 519], 00:35:14.205 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 660], 00:35:14.205 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 742], 99.95th=[ 742], 00:35:14.205 | 99.99th=[ 742] 00:35:14.205 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:35:14.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:14.205 lat (usec) : 250=1.32%, 500=49.15%, 750=46.14% 00:35:14.205 lat (msec) : 50=3.39% 00:35:14.205 cpu : usr=1.17%, sys=1.07%, ctx=531, majf=0, minf=1 00:35:14.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.205 latency : target=0, window=0, percentile=100.00%, depth=1 
00:35:14.205 job2: (groupid=0, jobs=1): err= 0: pid=581638: Mon Dec 9 06:34:08 2024 00:35:14.205 read: IOPS=19, BW=78.8KiB/s (80.7kB/s)(80.0KiB/1015msec) 00:35:14.205 slat (nsec): min=27119, max=45519, avg=28771.25, stdev=4111.32 00:35:14.205 clat (usec): min=1016, max=44985, avg=39635.33, stdev=9136.36 00:35:14.205 lat (usec): min=1044, max=45018, avg=39664.10, stdev=9136.74 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41157], 00:35:14.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:14.205 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:14.205 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:35:14.205 | 99.99th=[44827] 00:35:14.205 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:35:14.205 slat (nsec): min=9132, max=54608, avg=31859.86, stdev=8675.39 00:35:14.205 clat (usec): min=131, max=732, avg=393.05, stdev=104.13 00:35:14.205 lat (usec): min=142, max=766, avg=424.91, stdev=106.54 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 194], 5.00th=[ 251], 10.00th=[ 285], 20.00th=[ 314], 00:35:14.205 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 404], 00:35:14.205 | 70.00th=[ 453], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 578], 00:35:14.205 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 734], 99.95th=[ 734], 00:35:14.205 | 99.99th=[ 734] 00:35:14.205 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:35:14.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:14.205 lat (usec) : 250=4.70%, 500=74.44%, 750=17.11% 00:35:14.205 lat (msec) : 2=0.19%, 50=3.57% 00:35:14.205 cpu : usr=1.28%, sys=1.87%, ctx=532, majf=0, minf=1 00:35:14.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:14.205 job3: (groupid=0, jobs=1): err= 0: pid=581640: Mon Dec 9 06:34:08 2024 00:35:14.205 read: IOPS=18, BW=74.7KiB/s (76.5kB/s)(76.0KiB/1017msec) 00:35:14.205 slat (nsec): min=25805, max=26763, avg=26176.37, stdev=291.63 00:35:14.205 clat (usec): min=833, max=42058, avg=39524.97, stdev=9379.13 00:35:14.205 lat (usec): min=860, max=42084, avg=39551.15, stdev=9379.04 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 832], 5.00th=[ 832], 10.00th=[41157], 20.00th=[41157], 00:35:14.205 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:14.205 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:14.205 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:14.205 | 99.99th=[42206] 00:35:14.205 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:35:14.205 slat (nsec): min=9695, max=65656, avg=31244.76, stdev=9269.17 00:35:14.205 clat (usec): min=156, max=852, avg=479.31, stdev=125.35 00:35:14.205 lat (usec): min=180, max=885, avg=510.55, stdev=129.30 00:35:14.205 clat percentiles (usec): 00:35:14.205 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 371], 00:35:14.205 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 482], 60.00th=[ 510], 00:35:14.205 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 685], 
00:35:14.205 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 857], 99.95th=[ 857], 00:35:14.205 | 99.99th=[ 857] 00:35:14.205 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:35:14.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:14.205 lat (usec) : 250=2.82%, 500=51.79%, 750=40.49%, 1000=1.51% 00:35:14.205 lat (msec) : 50=3.39% 00:35:14.205 cpu : usr=0.89%, sys=1.48%, ctx=532, majf=0, minf=1 00:35:14.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.205 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:14.205 00:35:14.205 Run status group 0 (all jobs): 00:35:14.205 READ: bw=292KiB/s (299kB/s), 66.1KiB/s-78.8KiB/s (67.7kB/s-80.7kB/s), io=300KiB (307kB), run=1015-1028msec 00:35:14.205 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2018KiB/s (2040kB/s-2066kB/s), io=8192KiB (8389kB), run=1015-1028msec 00:35:14.205 00:35:14.205 Disk stats (read/write): 00:35:14.205 nvme0n1: ios=62/512, merge=0/0, ticks=511/316, in_queue=827, util=87.88% 00:35:14.205 nvme0n2: ios=47/512, merge=0/0, ticks=669/232, in_queue=901, util=96.54% 00:35:14.205 nvme0n3: ios=16/512, merge=0/0, ticks=624/121, in_queue=745, util=88.81% 00:35:14.205 nvme0n4: ios=54/512, merge=0/0, ticks=1465/216, in_queue=1681, util=98.63% 00:35:14.205 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:14.205 [global] 00:35:14.205 thread=1 00:35:14.205 invalidate=1 00:35:14.206 rw=write 00:35:14.206 time_based=1 00:35:14.206 runtime=1 00:35:14.206 ioengine=libaio 00:35:14.206 direct=1 00:35:14.206 bs=4096 00:35:14.206 iodepth=128 00:35:14.206 norandommap=0 00:35:14.206 numjobs=1 00:35:14.206 00:35:14.206 verify_dump=1 00:35:14.206 verify_backlog=512 00:35:14.206 verify_state_save=0 00:35:14.206 do_verify=1 00:35:14.206 verify=crc32c-intel 00:35:14.206 [job0] 00:35:14.206 filename=/dev/nvme0n1 00:35:14.206 [job1] 00:35:14.206 filename=/dev/nvme0n2 00:35:14.206 [job2] 00:35:14.206 filename=/dev/nvme0n3 00:35:14.206 [job3] 00:35:14.206 filename=/dev/nvme0n4 00:35:14.206 Could not set queue depth (nvme0n1) 00:35:14.206 Could not set queue depth (nvme0n2) 00:35:14.206 Could not set queue depth (nvme0n3) 00:35:14.206 Could not set queue depth (nvme0n4) 00:35:14.466 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:14.466 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:14.466 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:14.466 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:14.466 fio-3.35 00:35:14.466 Starting 4 threads 00:35:15.853 00:35:15.853 job0: (groupid=0, jobs=1): err= 0: pid=582108: Mon Dec 9 06:34:10 2024 00:35:15.853 read: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec) 00:35:15.853 slat (nsec): min=922, max=8359.6k, avg=64752.97, stdev=496786.71 00:35:15.853 clat (usec): min=2848, max=21294, avg=8499.10, stdev=2499.97 00:35:15.853 lat (usec): min=2852, max=21298, avg=8563.86, 
stdev=2531.39 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 4113], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6652], 00:35:15.853 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8455], 00:35:15.853 | 70.00th=[ 9241], 80.00th=[10421], 90.00th=[11731], 95.00th=[12911], 00:35:15.853 | 99.00th=[16909], 99.50th=[18744], 99.90th=[20579], 99.95th=[20579], 00:35:15.853 | 99.99th=[21365] 00:35:15.853 write: IOPS=7249, BW=28.3MiB/s (29.7MB/s)(28.5MiB/1008msec); 0 zone resets 00:35:15.853 slat (nsec): min=1615, max=9969.3k, avg=68464.50, stdev=443143.06 00:35:15.853 clat (usec): min=1166, max=51907, avg=9156.46, stdev=6863.19 00:35:15.853 lat (usec): min=1177, max=51908, avg=9224.93, stdev=6902.36 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 2737], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5932], 00:35:15.853 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7439], 00:35:15.853 | 70.00th=[ 8291], 80.00th=[ 9765], 90.00th=[14746], 95.00th=[20579], 00:35:15.853 | 99.00th=[41681], 99.50th=[50070], 99.90th=[51119], 99.95th=[52167], 00:35:15.853 | 99.99th=[52167] 00:35:15.853 bw ( KiB/s): min=25968, max=31496, per=28.18%, avg=28732.00, stdev=3908.89, samples=2 00:35:15.853 iops : min= 6492, max= 7874, avg=7183.00, stdev=977.22, samples=2 00:35:15.853 lat (msec) : 2=0.15%, 4=2.02%, 10=76.88%, 20=18.25%, 50=2.44% 00:35:15.853 lat (msec) : 100=0.26% 00:35:15.853 cpu : usr=4.97%, sys=5.96%, ctx=641, majf=0, minf=1 00:35:15.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:15.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:15.853 issued rwts: total=7168,7307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:15.853 job1: (groupid=0, jobs=1): err= 0: pid=582109: Mon Dec 9 06:34:10 2024 00:35:15.853 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:35:15.853 slat (nsec): min=943, max=10718k, avg=84294.90, stdev=646614.40 00:35:15.853 clat (usec): min=3633, max=34418, avg=11325.90, stdev=5241.40 00:35:15.853 lat (usec): min=3643, max=34420, avg=11410.20, stdev=5280.82 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 5145], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7701], 00:35:15.853 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[10552], 00:35:15.853 | 70.00th=[12649], 80.00th=[15008], 90.00th=[18482], 95.00th=[22414], 00:35:15.853 | 99.00th=[28967], 99.50th=[29230], 99.90th=[34341], 99.95th=[34341], 00:35:15.853 | 99.99th=[34341] 00:35:15.853 write: IOPS=5757, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec); 0 zone resets 00:35:15.853 slat (nsec): min=1641, max=13800k, avg=73162.91, stdev=568832.67 00:35:15.853 clat (usec): min=827, max=43970, avg=11005.48, stdev=7390.69 00:35:15.853 lat (usec): min=835, max=43977, avg=11078.64, stdev=7439.47 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 1565], 5.00th=[ 3556], 10.00th=[ 4293], 20.00th=[ 5342], 00:35:15.853 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 9110], 60.00th=[10945], 00:35:15.853 | 70.00th=[13173], 80.00th=[15270], 90.00th=[17957], 95.00th=[23462], 00:35:15.853 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:35:15.853 | 99.99th=[43779] 00:35:15.853 bw ( KiB/s): min=20608, max=24568, per=22.15%, avg=22588.00, stdev=2800.14, samples=2 00:35:15.853 iops : min= 5152, max= 6142, avg=5647.00, stdev=700.04, 
samples=2 00:35:15.853 lat (usec) : 1000=0.07% 00:35:15.853 lat (msec) : 2=0.81%, 4=2.96%, 10=51.75%, 20=35.89%, 50=8.52% 00:35:15.853 cpu : usr=4.49%, sys=6.49%, ctx=389, majf=0, minf=2 00:35:15.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:15.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:15.853 issued rwts: total=5632,5775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:15.853 job2: (groupid=0, jobs=1): err= 0: pid=582110: Mon Dec 9 06:34:10 2024 00:35:15.853 read: IOPS=7414, BW=29.0MiB/s (30.4MB/s)(29.1MiB/1003msec) 00:35:15.853 slat (nsec): min=1002, max=16028k, avg=70588.76, stdev=581013.78 00:35:15.853 clat (usec): min=1449, max=25754, avg=9146.65, stdev=2718.67 00:35:15.853 lat (usec): min=3905, max=25779, avg=9217.24, stdev=2759.21 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7111], 00:35:15.853 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8717], 00:35:15.853 | 70.00th=[ 9503], 80.00th=[11076], 90.00th=[13304], 95.00th=[14484], 00:35:15.853 | 99.00th=[17957], 99.50th=[18744], 99.90th=[22152], 99.95th=[22152], 00:35:15.853 | 99.99th=[25822] 00:35:15.853 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:35:15.853 slat (nsec): min=1695, max=6749.4k, avg=57451.02, stdev=338840.44 00:35:15.853 clat (usec): min=1225, max=18695, avg=7703.00, stdev=1813.55 00:35:15.853 lat (usec): min=1236, max=18697, avg=7760.45, stdev=1821.57 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 3359], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 6128], 00:35:15.853 | 30.00th=[ 7111], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8160], 00:35:15.853 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9896], 95.00th=[10814], 00:35:15.853 | 99.00th=[11207], 99.50th=[11600], 99.90th=[15401], 99.95th=[16909], 00:35:15.853 | 99.99th=[18744] 00:35:15.853 bw ( KiB/s): min=28688, max=32752, per=30.13%, avg=30720.00, stdev=2873.68, samples=2 00:35:15.853 iops : min= 7172, max= 8188, avg=7680.00, stdev=718.42, samples=2 00:35:15.853 lat (msec) : 2=0.19%, 4=0.91%, 10=81.75%, 20=16.94%, 50=0.21% 00:35:15.853 cpu : usr=5.19%, sys=5.99%, ctx=745, majf=0, minf=1 00:35:15.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:15.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:15.853 issued rwts: total=7437,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:15.853 job3: (groupid=0, jobs=1): err= 0: pid=582111: Mon Dec 9 06:34:10 2024 00:35:15.853 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:35:15.853 slat (nsec): min=1066, max=14248k, avg=102219.81, stdev=790950.63 00:35:15.853 clat (usec): min=3669, max=61967, avg=11844.86, stdev=6260.47 00:35:15.853 lat (usec): min=3671, max=61970, avg=11947.08, stdev=6327.51 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7898], 00:35:15.853 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10945], 00:35:15.853 | 70.00th=[12911], 80.00th=[14353], 90.00th=[17695], 95.00th=[20055], 00:35:15.853 | 99.00th=[39584], 99.50th=[57934], 99.90th=[62129], 
99.95th=[62129], 00:35:15.853 | 99.99th=[62129] 00:35:15.853 write: IOPS=4898, BW=19.1MiB/s (20.1MB/s)(19.3MiB/1007msec); 0 zone resets 00:35:15.853 slat (nsec): min=1758, max=33946k, avg=102089.63, stdev=781890.40 00:35:15.853 clat (usec): min=2188, max=66594, avg=14801.69, stdev=11876.18 00:35:15.853 lat (usec): min=2191, max=66603, avg=14903.78, stdev=11929.11 00:35:15.853 clat percentiles (usec): 00:35:15.853 | 1.00th=[ 4015], 5.00th=[ 5145], 10.00th=[ 5997], 20.00th=[ 7898], 00:35:15.853 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[12518], 00:35:15.853 | 70.00th=[14746], 80.00th=[16319], 90.00th=[32637], 95.00th=[43779], 00:35:15.853 | 99.00th=[58983], 99.50th=[63177], 99.90th=[65274], 99.95th=[66323], 00:35:15.853 | 99.99th=[66847] 00:35:15.853 bw ( KiB/s): min=18528, max=19920, per=18.85%, avg=19224.00, stdev=984.29, samples=2 00:35:15.853 iops : min= 4632, max= 4980, avg=4806.00, stdev=246.07, samples=2 00:35:15.853 lat (msec) : 4=0.71%, 10=45.69%, 20=43.36%, 50=7.93%, 100=2.31% 00:35:15.853 cpu : usr=4.08%, sys=4.08%, ctx=368, majf=0, minf=1 00:35:15.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:15.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:15.853 issued rwts: total=4608,4933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:15.853 00:35:15.853 Run status group 0 (all jobs): 00:35:15.853 READ: bw=96.3MiB/s (101MB/s), 17.9MiB/s-29.0MiB/s (18.7MB/s-30.4MB/s), io=97.1MiB (102MB), run=1003-1008msec 00:35:15.853 WRITE: bw=99.6MiB/s (104MB/s), 19.1MiB/s-29.9MiB/s (20.1MB/s-31.4MB/s), io=100MiB (105MB), run=1003-1008msec 00:35:15.853 00:35:15.853 Disk stats (read/write): 00:35:15.853 nvme0n1: ios=6147/6144, merge=0/0, ticks=48877/54623, in_queue=103500, util=88.48% 00:35:15.853 nvme0n2: ios=4649/5120, merge=0/0, ticks=47141/56536, in_queue=103677, util=92.99% 00:35:15.853 nvme0n3: ios=6150/6656, merge=0/0, ticks=52565/49172, in_queue=101737, util=88.92% 00:35:15.853 nvme0n4: ios=3633/3890, merge=0/0, ticks=42280/58075, in_queue=100355, util=99.68% 00:35:15.853 06:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:15.853 [global] 00:35:15.853 thread=1 00:35:15.853 invalidate=1 00:35:15.853 rw=randwrite 00:35:15.853 time_based=1 00:35:15.853 runtime=1 00:35:15.853 ioengine=libaio 00:35:15.853 direct=1 00:35:15.853 bs=4096 00:35:15.853 iodepth=128 00:35:15.853 norandommap=0 00:35:15.853 numjobs=1 00:35:15.853 00:35:15.853 verify_dump=1 00:35:15.853 verify_backlog=512 00:35:15.854 verify_state_save=0 00:35:15.854 do_verify=1 00:35:15.854 verify=crc32c-intel 00:35:15.854 [job0] 00:35:15.854 filename=/dev/nvme0n1 00:35:15.854 [job1] 00:35:15.854 filename=/dev/nvme0n2 00:35:15.854 [job2] 00:35:15.854 filename=/dev/nvme0n3 00:35:15.854 [job3] 00:35:15.854 filename=/dev/nvme0n4 00:35:15.854 Could not set queue depth (nvme0n1) 00:35:15.854 Could not set queue depth (nvme0n2) 00:35:15.854 Could not set queue depth (nvme0n3) 00:35:15.854 Could not set queue depth (nvme0n4) 00:35:16.114 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:16.114 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:35:16.114 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:16.114 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:16.114 fio-3.35 00:35:16.114 Starting 4 threads 00:35:17.495 00:35:17.495 job0: (groupid=0, jobs=1): err= 0: pid=582570: Mon Dec 9 06:34:11 2024 00:35:17.495 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:35:17.495 slat (nsec): min=1000, max=41321k, avg=164361.12, stdev=1297192.79 00:35:17.495 clat (usec): min=7493, max=66194, avg=20691.69, stdev=12438.56 00:35:17.495 lat (usec): min=7497, max=66222, avg=20856.05, stdev=12537.56 00:35:17.495 clat percentiles (usec): 00:35:17.495 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:35:17.495 | 30.00th=[10945], 40.00th=[13698], 50.00th=[15795], 60.00th=[20841], 00:35:17.495 | 70.00th=[24249], 80.00th=[30540], 90.00th=[39060], 95.00th=[50594], 00:35:17.495 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61080], 99.95th=[62653], 00:35:17.495 | 99.99th=[66323] 00:35:17.495 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1005msec); 0 zone resets 00:35:17.495 slat (nsec): min=1697, max=15504k, avg=195846.71, stdev=1078929.74 00:35:17.495 clat (usec): min=1205, max=91697, avg=25764.39, stdev=22822.08 00:35:17.495 lat (usec): min=1216, max=91704, avg=25960.24, stdev=22988.83 00:35:17.495 clat percentiles (usec): 00:35:17.495 | 1.00th=[ 4555], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9372], 00:35:17.495 | 30.00th=[10814], 40.00th=[15401], 50.00th=[15795], 60.00th=[19530], 00:35:17.495 | 70.00th=[27657], 80.00th=[33817], 90.00th=[78119], 95.00th=[81265], 00:35:17.495 | 99.00th=[86508], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:35:17.495 | 99.99th=[91751] 00:35:17.495 bw ( KiB/s): min= 8888, max=13136, per=10.96%, avg=11012.00, stdev=3003.79, samples=2 00:35:17.495 iops : min= 2222, max= 3284, avg=2753.00, stdev=750.95, samples=2 00:35:17.495 lat (msec) : 2=0.06%, 10=19.43%, 20=41.57%, 50=29.28%, 100=9.67% 00:35:17.495 cpu : usr=2.29%, sys=3.19%, ctx=264, majf=0, minf=1 00:35:17.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:35:17.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:17.496 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:17.496 job1: (groupid=0, jobs=1): err= 0: pid=582583: Mon Dec 9 06:34:11 2024 00:35:17.496 read: IOPS=8721, BW=34.1MiB/s (35.7MB/s)(34.3MiB/1007msec) 00:35:17.496 slat (nsec): min=960, max=6879.1k, avg=58217.94, stdev=468550.85 00:35:17.496 clat (usec): min=2237, max=13998, avg=7579.45, stdev=1742.56 00:35:17.496 lat (usec): min=2442, max=15478, avg=7637.67, stdev=1778.50 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 4948], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6390], 00:35:17.496 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7308], 00:35:17.496 | 70.00th=[ 7635], 80.00th=[ 8455], 90.00th=[10552], 95.00th=[11469], 00:35:17.496 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13566], 99.95th=[13566], 00:35:17.496 | 99.99th=[13960] 00:35:17.496 write: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(36.0MiB/1007msec); 0 zone resets 00:35:17.496 slat (nsec): min=1575, max=6013.6k, avg=47889.82, stdev=322799.45 00:35:17.496 clat (usec): min=1269, max=13605, 
avg=6639.47, stdev=1558.37 00:35:17.496 lat (usec): min=1278, max=13613, avg=6687.36, stdev=1567.79 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 2900], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 5211], 00:35:17.496 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7046], 00:35:17.496 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 8979], 95.00th=[ 9241], 00:35:17.496 | 99.00th=[10552], 99.50th=[11207], 99.90th=[13173], 99.95th=[13566], 00:35:17.496 | 99.99th=[13566] 00:35:17.496 bw ( KiB/s): min=36472, max=36864, per=36.48%, avg=36668.00, stdev=277.19, samples=2 00:35:17.496 iops : min= 9118, max= 9216, avg=9167.00, stdev=69.30, samples=2 00:35:17.496 lat (msec) : 2=0.19%, 4=1.68%, 10=90.27%, 20=7.85% 00:35:17.496 cpu : usr=4.17%, sys=9.15%, ctx=732, majf=0, minf=1 00:35:17.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:17.496 issued rwts: total=8783,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:17.496 job2: (groupid=0, jobs=1): err= 0: pid=582585: Mon Dec 9 06:34:11 2024 00:35:17.496 read: IOPS=7517, BW=29.4MiB/s (30.8MB/s)(29.6MiB/1007msec) 00:35:17.496 slat (nsec): min=995, max=8219.5k, avg=67898.98, stdev=554476.97 00:35:17.496 clat (usec): min=1832, max=18731, avg=8964.02, stdev=2159.23 00:35:17.496 lat (usec): min=2628, max=19597, avg=9031.92, stdev=2203.72 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 7504], 00:35:17.496 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:35:17.496 | 70.00th=[ 9110], 80.00th=[10552], 90.00th=[12649], 95.00th=[13698], 00:35:17.496 | 99.00th=[15270], 99.50th=[15664], 99.90th=[18744], 99.95th=[18744], 00:35:17.496 | 99.99th=[18744] 00:35:17.496 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:35:17.496 slat (nsec): min=1620, max=7227.6k, avg=58764.76, stdev=426869.96 00:35:17.496 clat (usec): min=1553, max=16103, avg=7788.08, stdev=1844.60 00:35:17.496 lat (usec): min=1561, max=16106, avg=7846.84, stdev=1860.42 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 3425], 5.00th=[ 5080], 10.00th=[ 5342], 20.00th=[ 6325], 00:35:17.496 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8291], 00:35:17.496 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[10552], 95.00th=[11338], 00:35:17.496 | 99.00th=[13173], 99.50th=[13960], 99.90th=[15008], 99.95th=[15270], 00:35:17.496 | 99.99th=[16057] 00:35:17.496 bw ( KiB/s): min=29872, max=31568, per=30.56%, avg=30720.00, stdev=1199.25, samples=2 00:35:17.496 iops : min= 7468, max= 7892, avg=7680.00, stdev=299.81, samples=2 00:35:17.496 lat (msec) : 2=0.12%, 4=0.75%, 10=82.85%, 20=16.28% 00:35:17.496 cpu : usr=5.37%, sys=6.76%, ctx=538, majf=0, minf=1 00:35:17.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:17.496 issued rwts: total=7570,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:17.496 job3: (groupid=0, jobs=1): err= 0: pid=582586: Mon Dec 9 06:34:11 2024 00:35:17.496 read: IOPS=5084, BW=19.9MiB/s 
(20.8MB/s)(20.0MiB/1007msec) 00:35:17.496 slat (nsec): min=1037, max=13606k, avg=90785.57, stdev=706086.36 00:35:17.496 clat (usec): min=3390, max=36924, avg=11709.41, stdev=3719.91 00:35:17.496 lat (usec): min=3399, max=36935, avg=11800.20, stdev=3789.58 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9372], 00:35:17.496 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10552], 60.00th=[11469], 00:35:17.496 | 70.00th=[12125], 80.00th=[14091], 90.00th=[15795], 95.00th=[19006], 00:35:17.496 | 99.00th=[27132], 99.50th=[31065], 99.90th=[35914], 99.95th=[36963], 00:35:17.496 | 99.99th=[36963] 00:35:17.496 write: IOPS=5488, BW=21.4MiB/s (22.5MB/s)(21.6MiB/1007msec); 0 zone resets 00:35:17.496 slat (nsec): min=1656, max=9501.5k, avg=85263.80, stdev=556097.24 00:35:17.496 clat (usec): min=1223, max=43581, avg=12263.61, stdev=6750.49 00:35:17.496 lat (usec): min=1235, max=43585, avg=12348.88, stdev=6795.54 00:35:17.496 clat percentiles (usec): 00:35:17.496 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 7767], 00:35:17.496 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10945], 00:35:17.496 | 70.00th=[13304], 80.00th=[15795], 90.00th=[21103], 95.00th=[29230], 00:35:17.496 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:35:17.496 | 99.99th=[43779] 00:35:17.496 bw ( KiB/s): min=18624, max=24576, per=21.49%, avg=21600.00, stdev=4208.70, samples=2 00:35:17.496 iops : min= 4656, max= 6144, avg=5400.00, stdev=1052.17, samples=2 00:35:17.496 lat (msec) : 2=0.08%, 4=0.28%, 10=46.97%, 20=44.82%, 50=7.85% 00:35:17.496 cpu : usr=4.77%, sys=5.17%, ctx=340, majf=0, minf=2 00:35:17.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:17.496 issued rwts: total=5120,5527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:17.496 00:35:17.496 Run status group 0 (all jobs): 00:35:17.496 READ: bw=93.2MiB/s (97.8MB/s), 9.95MiB/s-34.1MiB/s (10.4MB/s-35.7MB/s), io=93.9MiB (98.4MB), run=1005-1007msec 00:35:17.496 WRITE: bw=98.2MiB/s (103MB/s), 11.2MiB/s-35.7MiB/s (11.7MB/s-37.5MB/s), io=98.8MiB (104MB), run=1005-1007msec 00:35:17.496 00:35:17.496 Disk stats (read/write): 00:35:17.496 nvme0n1: ios=2069/2155, merge=0/0, ticks=23303/29812, in_queue=53115, util=96.79% 00:35:17.496 nvme0n2: ios=7422/7680, merge=0/0, ticks=53303/49037, in_queue=102340, util=88.52% 00:35:17.496 nvme0n3: ios=6144/6618, merge=0/0, ticks=53110/49802, in_queue=102912, util=88.91% 00:35:17.496 nvme0n4: ios=4579/4615, merge=0/0, ticks=49627/53033, in_queue=102660, util=90.30% 00:35:17.496 06:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:17.496 06:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=582622 00:35:17.496 06:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:17.496 06:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:17.496 [global] 00:35:17.496 thread=1 00:35:17.496 invalidate=1 00:35:17.496 rw=read 00:35:17.496 time_based=1 00:35:17.496 runtime=10 00:35:17.496 ioengine=libaio 
00:35:17.496 direct=1 00:35:17.496 bs=4096 00:35:17.496 iodepth=1 00:35:17.496 norandommap=1 00:35:17.496 numjobs=1 00:35:17.496 00:35:17.496 [job0] 00:35:17.496 filename=/dev/nvme0n1 00:35:17.496 [job1] 00:35:17.496 filename=/dev/nvme0n2 00:35:17.496 [job2] 00:35:17.496 filename=/dev/nvme0n3 00:35:17.496 [job3] 00:35:17.496 filename=/dev/nvme0n4 00:35:17.496 Could not set queue depth (nvme0n1) 00:35:17.496 Could not set queue depth (nvme0n2) 00:35:17.496 Could not set queue depth (nvme0n3) 00:35:17.496 Could not set queue depth (nvme0n4) 00:35:17.757 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.757 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.757 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.757 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.757 fio-3.35 00:35:17.757 Starting 4 threads 00:35:20.301 06:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:20.562 06:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:20.562 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13549568, buflen=4096 00:35:20.562 fio: pid=582935, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:20.821 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:20.821 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:20.821 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5210112, buflen=4096 00:35:20.821 fio: pid=582934, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:20.821 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:20.821 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:20.821 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12218368, buflen=4096 00:35:20.821 fio: pid=582902, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:21.081 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:21.081 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:21.081 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=319488, buflen=4096 00:35:21.081 fio: pid=582915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:21.081 00:35:21.081 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=582902: 
Mon Dec 9 06:34:15 2024 00:35:21.081 read: IOPS=994, BW=3976KiB/s (4071kB/s)(11.7MiB/3001msec) 00:35:21.081 slat (usec): min=6, max=11028, avg=31.01, stdev=239.30 00:35:21.081 clat (usec): min=483, max=9137, avg=961.26, stdev=220.59 00:35:21.081 lat (usec): min=508, max=11939, avg=992.27, stdev=325.80 00:35:21.081 clat percentiles (usec): 00:35:21.081 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 816], 20.00th=[ 881], 00:35:21.081 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 988], 00:35:21.081 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:35:21.081 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 5800], 99.95th=[ 5932], 00:35:21.081 | 99.99th=[ 9110] 00:35:21.081 bw ( KiB/s): min= 3984, max= 4104, per=42.39%, avg=4041.60, stdev=49.77, samples=5 00:35:21.081 iops : min= 996, max= 1026, avg=1010.40, stdev=12.44, samples=5 00:35:21.081 lat (usec) : 500=0.07%, 750=3.85%, 1000=61.29% 00:35:21.081 lat (msec) : 2=34.65%, 10=0.10% 00:35:21.081 cpu : usr=1.13%, sys=2.90%, ctx=2986, majf=0, minf=1 00:35:21.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.082 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.082 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:21.082 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=582915: Mon Dec 9 06:34:15 2024 00:35:21.082 read: IOPS=24, BW=97.3KiB/s (99.7kB/s)(312KiB/3206msec) 00:35:21.082 slat (usec): min=9, max=4807, avg=89.42, stdev=538.09 00:35:21.082 clat (usec): min=680, max=42250, avg=40667.66, stdev=6525.33 00:35:21.082 lat (usec): min=708, max=47058, avg=40757.92, stdev=6562.44 00:35:21.082 clat percentiles (usec): 00:35:21.082 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:21.082 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:21.082 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:21.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:21.082 | 99.99th=[42206] 00:35:21.082 bw ( KiB/s): min= 96, max= 104, per=1.02%, avg=97.83, stdev= 3.25, samples=6 00:35:21.082 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:35:21.082 lat (usec) : 750=1.27%, 1000=1.27% 00:35:21.082 lat (msec) : 50=96.20% 00:35:21.082 cpu : usr=0.12%, sys=0.00%, ctx=82, majf=0, minf=2 00:35:21.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.082 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.082 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:21.082 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=582934: Mon Dec 9 06:34:15 2024 00:35:21.082 read: IOPS=448, BW=1794KiB/s (1837kB/s)(5088KiB/2836msec) 00:35:21.082 slat (usec): min=6, max=15427, avg=43.64, stdev=488.58 00:35:21.082 clat (usec): min=280, max=42143, avg=2162.30, stdev=7522.45 00:35:21.082 lat (usec): min=307, max=42170, avg=2205.95, stdev=7535.36 00:35:21.082 clat percentiles (usec): 00:35:21.082 | 1.00th=[ 433], 5.00th=[ 523], 10.00th=[ 570], 20.00th=[ 635], 00:35:21.082 | 
30.00th=[ 685], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 799],
00:35:21.082 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 963],
00:35:21.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:21.082 | 99.99th=[42206]
00:35:21.082 bw ( KiB/s): min= 96, max= 5024, per=15.10%, avg=1440.00, stdev=2148.53, samples=5
00:35:21.082 iops : min= 24, max= 1256, avg=360.00, stdev=537.13, samples=5
00:35:21.082 lat (usec) : 500=3.53%, 750=44.15%, 1000=48.47%
00:35:21.082 lat (msec) : 2=0.31%, 50=3.46%
00:35:21.082 cpu : usr=0.63%, sys=1.13%, ctx=1277, majf=0, minf=2
00:35:21.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:21.082 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:21.082 issued rwts: total=1273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:21.082 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:21.082 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=582935: Mon Dec 9 06:34:15 2024
00:35:21.082 read: IOPS=1240, BW=4960KiB/s (5079kB/s)(12.9MiB/2668msec)
00:35:21.082 slat (nsec): min=7046, max=58441, avg=23160.14, stdev=7747.82
00:35:21.082 clat (usec): min=277, max=957, avg=771.55, stdev=66.71
00:35:21.082 lat (usec): min=303, max=973, avg=794.71, stdev=68.37
00:35:21.082 clat percentiles (usec):
00:35:21.082 | 1.00th=[ 570], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 725],
00:35:21.082 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 799],
00:35:21.082 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 865],
00:35:21.082 | 99.00th=[ 906], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 947],
00:35:21.082 | 99.99th=[ 955]
00:35:21.082 bw ( KiB/s): min= 4936, max= 5072, per=52.53%, avg=5008.00, stdev=53.37, samples=5
00:35:21.082 iops : min= 1234, max= 1268, avg=1252.00, stdev=13.34, samples=5
00:35:21.082 lat (usec) : 500=0.30%, 750=29.89%, 1000=69.78%
00:35:21.082 cpu : usr=1.09%, sys=3.45%, ctx=3309, majf=0, minf=2
00:35:21.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:21.082 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:21.082 issued rwts: total=3309,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:21.082 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:21.082
00:35:21.082 Run status group 0 (all jobs):
00:35:21.082 READ: bw=9533KiB/s (9762kB/s), 97.3KiB/s-4960KiB/s (99.7kB/s-5079kB/s), io=29.8MiB (31.3MB), run=2668-3206msec
00:35:21.082
00:35:21.082 Disk stats (read/write):
00:35:21.082 nvme0n1: ios=2858/0, merge=0/0, ticks=2763/0, in_queue=2763, util=94.99%
00:35:21.082 nvme0n2: ios=76/0, merge=0/0, ticks=3091/0, in_queue=3091, util=96.04%
00:35:21.082 nvme0n3: ios=1073/0, merge=0/0, ticks=3714/0, in_queue=3714, util=99.89%
00:35:21.082 nvme0n4: ios=3261/0, merge=0/0, ticks=2466/0, in_queue=2466, util=96.45%
00:35:21.082 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.082 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:35:21.342 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.342 06:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:35:21.603 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.603 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 582622
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:35:21.862 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:22.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:35:22.123 nvmf hotplug test: fio failed as expected
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:22.123 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:22.123 rmmod nvme_tcp
00:35:22.123 rmmod nvme_fabrics
00:35:22.384 rmmod nvme_keyring
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 579856 ']'
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 579856 ']'
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579856'
00:35:22.384 killing process with pid 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 579856
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:22.384 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:22.385 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:22.385 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:22.385 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:24.931 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:24.931
00:35:24.931 real 0m27.718s
00:35:24.931 user 1m47.662s
00:35:24.931 sys 0m11.706s
00:35:24.931 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:24.931 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:35:24.931 ************************************
00:35:24.931 END TEST nvmf_fio_target
00:35:24.931 ************************************
00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:24.931 ************************************
00:35:24.931 START TEST nvmf_bdevio
00:35:24.931 ************************************
00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:24.931 * Looking for test storage...
00:35:24.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.931 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.932 --rc genhtml_branch_coverage=1 00:35:24.932 --rc genhtml_function_coverage=1 00:35:24.932 --rc genhtml_legend=1 00:35:24.932 --rc geninfo_all_blocks=1 00:35:24.932 --rc geninfo_unexecuted_blocks=1 00:35:24.932 00:35:24.932 ' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.932 --rc genhtml_branch_coverage=1 00:35:24.932 --rc genhtml_function_coverage=1 00:35:24.932 --rc genhtml_legend=1 00:35:24.932 --rc geninfo_all_blocks=1 00:35:24.932 --rc geninfo_unexecuted_blocks=1 00:35:24.932 00:35:24.932 ' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.932 --rc genhtml_branch_coverage=1 00:35:24.932 --rc genhtml_function_coverage=1 00:35:24.932 --rc genhtml_legend=1 00:35:24.932 --rc geninfo_all_blocks=1 00:35:24.932 --rc geninfo_unexecuted_blocks=1 00:35:24.932 00:35:24.932 ' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.932 --rc genhtml_branch_coverage=1 00:35:24.932 --rc genhtml_function_coverage=1 00:35:24.932 --rc genhtml_legend=1 00:35:24.932 --rc geninfo_all_blocks=1 00:35:24.932 --rc geninfo_unexecuted_blocks=1 00:35:24.932 00:35:24.932 ' 00:35:24.932 06:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.932 06:34:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:24.932 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.933 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.091 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:33.092 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:33.092 06:34:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:33.092 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:33.092 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:33.092 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:33.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:35:33.092 00:35:33.092 --- 10.0.0.2 ping statistics --- 00:35:33.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.092 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:33.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:35:33.092 00:35:33.092 --- 10.0.0.1 ping statistics --- 00:35:33.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.092 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:33.092 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.093 06:34:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=587609 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 587609 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 587609 ']' 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.093 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 [2024-12-09 06:34:26.697668] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:33.093 [2024-12-09 06:34:26.699114] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:35:33.093 [2024-12-09 06:34:26.699184] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.093 [2024-12-09 06:34:26.779417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:33.093 [2024-12-09 06:34:26.828226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.093 [2024-12-09 06:34:26.828277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.093 [2024-12-09 06:34:26.828286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.093 [2024-12-09 06:34:26.828293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.093 [2024-12-09 06:34:26.828299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:33.093 [2024-12-09 06:34:26.830121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:33.093 [2024-12-09 06:34:26.830275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:33.093 [2024-12-09 06:34:26.830425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:33.093 [2024-12-09 06:34:26.830426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:33.093 [2024-12-09 06:34:26.914815] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
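
[editor's note] Condensed for reference: the nvmf_tcp_init and nvmfappstart steps traced above boil down to the following root-only sketch. Interface names (cvl_0_0/cvl_0_1), addresses, port, and core mask are the values this run detected, not general defaults.

    #!/usr/bin/env bash
    # Minimal sketch of the nvmf_tcp_init plumbing traced above: the target-side
    # port is isolated in a network namespace so target and initiator can talk
    # over real NICs on a single host. Requires root.
    set -e

    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Open the listener port, tagged with a comment so teardown can find it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Reachability checks, matching the two pings in the log.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Start the target inside the namespace with the flags from this run
    # (-m 0x78 = cores 3-6, --interrupt-mode from the job configuration).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
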
00:35:33.093 [2024-12-09 06:34:26.915325] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:33.093 [2024-12-09 06:34:26.916044] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:33.093 [2024-12-09 06:34:26.916770] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:33.093 [2024-12-09 06:34:26.916809] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 [2024-12-09 06:34:27.563255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 Malloc0 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.093 06:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 [2024-12-09 06:34:27.643218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.093 { 00:35:33.093 "params": { 00:35:33.093 "name": "Nvme$subsystem", 00:35:33.093 "trtype": "$TEST_TRANSPORT", 00:35:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.093 "adrfam": "ipv4", 00:35:33.093 "trsvcid": "$NVMF_PORT", 00:35:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.093 "hdgst": ${hdgst:-false}, 00:35:33.093 "ddgst": ${ddgst:-false} 00:35:33.093 }, 00:35:33.093 "method": "bdev_nvme_attach_controller" 00:35:33.093 } 00:35:33.093 EOF 00:35:33.093 )") 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:33.093 06:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.093 "params": { 00:35:33.093 "name": "Nvme1", 00:35:33.093 "trtype": "tcp", 00:35:33.093 "traddr": "10.0.0.2", 00:35:33.093 "adrfam": "ipv4", 00:35:33.093 "trsvcid": "4420", 00:35:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:33.093 "hdgst": false, 00:35:33.093 "ddgst": false 00:35:33.093 }, 00:35:33.093 "method": "bdev_nvme_attach_controller" 00:35:33.093 }' 00:35:33.355 [2024-12-09 06:34:27.700251] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
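
[editor's note] The rpc_cmd provisioning traced above, rewritten as direct scripts/rpc.py calls for readability; rpc_cmd is a thin wrapper over rpc.py on the default /var/tmp/spdk.sock socket, and all flag values below are copied from this run's trace.

    #!/usr/bin/env bash
    # The five rpc_cmd calls traced above: TCP transport, a 64 MiB malloc bdev
    # with 512-byte blocks, a subsystem (allow-any-host, fixed serial), its
    # namespace, and the listener on 10.0.0.2:4420.
    set -e
    RPC=./scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
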
00:35:33.355 [2024-12-09 06:34:27.700334] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587683 ] 00:35:33.355 [2024-12-09 06:34:27.798929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:33.355 [2024-12-09 06:34:27.854240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.355 [2024-12-09 06:34:27.854385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.355 [2024-12-09 06:34:27.854385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:33.616 I/O targets: 00:35:33.616 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:33.616 00:35:33.616 00:35:33.616 CUnit - A unit testing framework for C - Version 2.1-3 00:35:33.616 http://cunit.sourceforge.net/ 00:35:33.616 00:35:33.616 00:35:33.616 Suite: bdevio tests on: Nvme1n1 00:35:33.616 Test: blockdev write read block ...passed 00:35:33.616 Test: blockdev write zeroes read block ...passed 00:35:33.616 Test: blockdev write zeroes read no split ...passed 00:35:33.616 Test: blockdev write zeroes read split ...passed 00:35:33.616 Test: blockdev write zeroes read split partial ...passed 00:35:33.616 Test: blockdev reset ...[2024-12-09 06:34:28.180402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:33.616 [2024-12-09 06:34:28.180502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217da40 (9): Bad file descriptor 00:35:33.616 [2024-12-09 06:34:28.187315] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
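
[editor's note] For reference, the config the bdevio process above consumed: the inner attach stanza is exactly the expansion the trace printed a few lines up; the outer "subsystems" wrapper is the standard SPDK JSON-config shape, which the trace itself does not show, so treat that part as an assumption.

    #!/usr/bin/env bash
    # gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per
    # subsystem and the caller hands the result over on an anonymous fd, which
    # is why the trace reads "--json /dev/fd/62".
    gen_config() {
        cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    }

    # Process substitution exposes the config as /dev/fd/NN.
    ./test/bdev/bdevio/bdevio --json <(gen_config)
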
00:35:33.616 passed 00:35:33.877 Test: blockdev write read 8 blocks ...passed 00:35:33.877 Test: blockdev write read size > 128k ...passed 00:35:33.877 Test: blockdev write read invalid size ...passed 00:35:33.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:33.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:33.877 Test: blockdev write read max offset ...passed 00:35:33.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:33.877 Test: blockdev writev readv 8 blocks ...passed 00:35:33.877 Test: blockdev writev readv 30 x 1block ...passed 00:35:33.877 Test: blockdev writev readv block ...passed 00:35:33.877 Test: blockdev writev readv size > 128k ...passed 00:35:33.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:33.877 Test: blockdev comparev and writev ...[2024-12-09 06:34:28.453521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.453553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.453568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.453575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.454079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.454092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.454106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.454114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.454613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.454626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.454633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.455125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.455137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:33.877 [2024-12-09 06:34:28.455150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:33.877 [2024-12-09 06:34:28.455157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:34.138 passed 00:35:34.138 Test: blockdev nvme passthru rw ...passed 00:35:34.138 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:34:28.539288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.138 [2024-12-09 06:34:28.539302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:34.138 [2024-12-09 06:34:28.539661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.138 [2024-12-09 06:34:28.539673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:34.138 [2024-12-09 06:34:28.540014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.138 [2024-12-09 06:34:28.540024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:34.138 [2024-12-09 06:34:28.540329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.138 [2024-12-09 06:34:28.540341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:34.138 passed 00:35:34.138 Test: blockdev nvme admin passthru ...passed 00:35:34.138 Test: blockdev copy ...passed 00:35:34.138 00:35:34.138 Run Summary: Type Total Ran Passed Failed Inactive 00:35:34.138 suites 1 1 n/a 0 0 00:35:34.138 tests 23 23 23 0 0 00:35:34.138 asserts 152 152 152 0 n/a 00:35:34.138 00:35:34.138 Elapsed time = 1.162 seconds 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.138 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.138 rmmod nvme_tcp 00:35:34.398 rmmod nvme_fabrics 00:35:34.399 rmmod nvme_keyring 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
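
[editor's note] The teardown that follows (nvmfcleanup, killprocess, iptr) condenses to the sketch below. The pid is this run's nvmfpid; the namespace delete happens inside remove_spdk_ns with xtrace disabled, so that line is inferred rather than traced.

    #!/usr/bin/env bash
    # Sketch of the nvmftestfini teardown traced above. The SPDK_NVMF comment
    # tag attached when the ACCEPT rule was installed is what makes the
    # firewall cleanup a one-liner: filter tagged rules out of the saved set.

    modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics || true

    # Guarded kill, as killprocess does: check the command name first so a
    # recycled pid is never signalled.
    pid=587609                       # nvmfpid from this run
    name=$(ps --no-headers -o comm= "$pid" || true)
    if [[ -n $name && $name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi

    # Drop every rule this test installed, then the namespace and addresses.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk      # inferred: remove_spdk_ns runs untraced
    ip -4 addr flush cvl_0_1
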
00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 587609 ']' 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 587609 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 587609 ']' 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 587609 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587609 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587609' 00:35:34.399 killing process with pid 587609 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 587609 00:35:34.399 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 587609 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.659 06:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:36.574 00:35:36.574 real 0m12.019s 00:35:36.574 user 0m9.167s 
00:35:36.574 sys 0m6.315s 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:36.574 ************************************ 00:35:36.574 END TEST nvmf_bdevio 00:35:36.574 ************************************ 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:36.574 00:35:36.574 real 4m56.521s 00:35:36.574 user 9m37.244s 00:35:36.574 sys 2m3.044s 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.574 06:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:36.574 ************************************ 00:35:36.574 END TEST nvmf_target_core_interrupt_mode 00:35:36.574 ************************************ 00:35:36.574 06:34:31 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:36.574 06:34:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:36.574 06:34:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:36.574 06:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.835 ************************************ 00:35:36.835 START TEST nvmf_interrupt 00:35:36.835 ************************************ 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:36.835 * Looking for test storage... 
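
[editor's note] The lcov probe that opens each test file above, and repeats below for interrupt.sh, walks cmp_versions field by field to decide whether the old-style --rc coverage switches apply. A compact bash sketch of that compare, assuming purely numeric fields, with the same awk extraction of the version number:

    #!/usr/bin/env bash
    # Sketch of the cmp_versions '<' path traced in the log: split both
    # versions on '.', '-' and ':' and compare numerically field by field,
    # treating missing fields as 0. Equal versions are not less-than.
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1
    }

    # lcov releases before 2.x accept these --rc switch names, hence the probe.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
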
00:35:36.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.835 --rc genhtml_branch_coverage=1 00:35:36.835 --rc genhtml_function_coverage=1 00:35:36.835 --rc genhtml_legend=1 00:35:36.835 --rc geninfo_all_blocks=1 00:35:36.835 --rc geninfo_unexecuted_blocks=1 00:35:36.835 00:35:36.835 ' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.835 --rc genhtml_branch_coverage=1 00:35:36.835 --rc genhtml_function_coverage=1 00:35:36.835 --rc genhtml_legend=1 00:35:36.835 --rc geninfo_all_blocks=1 00:35:36.835 --rc geninfo_unexecuted_blocks=1 00:35:36.835 00:35:36.835 ' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.835 --rc genhtml_branch_coverage=1 00:35:36.835 --rc genhtml_function_coverage=1 00:35:36.835 --rc genhtml_legend=1 00:35:36.835 --rc geninfo_all_blocks=1 00:35:36.835 --rc geninfo_unexecuted_blocks=1 00:35:36.835 00:35:36.835 ' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:36.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.835 --rc genhtml_branch_coverage=1 00:35:36.835 --rc genhtml_function_coverage=1 00:35:36.835 --rc genhtml_legend=1 00:35:36.835 --rc geninfo_all_blocks=1 00:35:36.835 --rc geninfo_unexecuted_blocks=1 00:35:36.835 00:35:36.835 ' 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.835 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.836 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.098 06:34:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.679 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:43.680 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.680 06:34:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:43.680 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:43.680 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:43.680 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:43.680 06:34:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:43.680 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:43.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:35:43.940 00:35:43.940 --- 10.0.0.2 ping statistics --- 00:35:43.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.940 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:43.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:35:43.940 00:35:43.940 --- 10.0.0.1 ping statistics --- 00:35:43.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.940 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:43.940 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=591860 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 591860 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 591860 ']' 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.941 06:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:43.941 [2024-12-09 06:34:38.483244] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:43.941 [2024-12-09 06:34:38.484700] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:35:43.941 [2024-12-09 06:34:38.484765] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.201 [2024-12-09 06:34:38.582436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:44.201 [2024-12-09 06:34:38.632188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:44.201 [2024-12-09 06:34:38.632241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.201 [2024-12-09 06:34:38.632249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.201 [2024-12-09 06:34:38.632256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.201 [2024-12-09 06:34:38.632263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.201 [2024-12-09 06:34:38.633908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.201 [2024-12-09 06:34:38.633912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.201 [2024-12-09 06:34:38.710094] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:44.202 [2024-12-09 06:34:38.710879] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:44.202 [2024-12-09 06:34:38.711108] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:44.775 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:45.037 5000+0 records in 00:35:45.037 5000+0 records out 00:35:45.037 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0197561 s, 518 MB/s 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.037 AIO0 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.037 [2024-12-09 06:34:39.422931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.037 06:34:39 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.037 [2024-12-09 06:34:39.451320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 591860 0 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 0 idle 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:45.037 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591860 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:00.31 reactor_0' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591860 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:00.31 reactor_0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 591860 1 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 1 idle 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591864 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:00.00 reactor_1' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591864 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:00.00 reactor_1 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=592190 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
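The trace above and below shows how the test drives both reactors busy with spdk_nvme_perf (-q 256 -o 4096 -w randrw on cores 0xC) while reactor_is_busy_or_idle samples the per-thread CPU of the target process: one batch iteration of top isolates the reactor_N thread, column 9 (%CPU) is extracted, and the rate is compared against a threshold (65% for busy by default, lowered to BUSY_THRESHOLD=30 while perf is still ramping; 30% for idle). A minimal standalone sketch of that check, assuming the same procps top/sed/awk toolchain seen in the log:

reactor_cpu_ok() {
    # Sketch of the reactor_is_busy_or_idle pattern traced above.
    local pid=$1 idx=$2 state=$3
    local busy_threshold=65 idle_threshold=30 line cpu_rate
    # One batch iteration of top, per-thread (-H), pinned to the target pid.
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    # Column 9 of top's per-thread output is %CPU; strip leading spaces first.
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    [[ -n $cpu_rate ]] || return 1   # reactor thread not found in top output
    cpu_rate=${cpu_rate%.*}          # drop the fractional part, as the log does
    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))
    else
        (( cpu_rate <= idle_threshold ))
    fi
}

Called as, e.g., reactor_cpu_ok 591860 0 busy; the 80.0 and 93.8 %CPU samples below clear the lowered 30% busy threshold, and the 0.0 samples after perf finishes satisfy the idle check.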
00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 591860 0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 591860 0 busy 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:45.298 06:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591860 root 20 0 128.2g 44032 32768 R 80.0 0.0 0:00.44 reactor_0' 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591860 root 20 0 128.2g 44032 32768 R 80.0 0.0 0:00.44 reactor_0 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 591860 1 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 591860 1 busy 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.559 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:45.560 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591864 root 20 0 128.2g 44032 32768 R 93.8 0.0 0:00.24 reactor_1' 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591864 root 20 0 128.2g 44032 32768 R 93.8 0.0 0:00.24 reactor_1 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.820 06:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 592190 00:35:55.822 Initializing NVMe Controllers 00:35:55.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:55.822 Controller IO queue size 256, less than required. 00:35:55.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:55.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:55.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:55.822 Initialization complete. Launching workers. 
00:35:55.822 ======================================================== 00:35:55.822 Latency(us) 00:35:55.822 Device Information : IOPS MiB/s Average min max 00:35:55.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18925.57 73.93 13531.84 3636.45 32784.36 00:35:55.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21573.66 84.27 11870.72 3003.20 20830.59 00:35:55.822 ======================================================== 00:35:55.822 Total : 40499.23 158.20 12646.98 3003.20 32784.36 00:35:55.822 00:35:55.822 [2024-12-09 06:34:49.987551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170d6e0 is same with the state(6) to be set 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 591860 0 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 0 idle 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:55.822 06:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591860 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:20.30 reactor_0' 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591860 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:20.30 reactor_0 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 591860 1 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 1 idle 
00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:55.822 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591864 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:10.00 reactor_1' 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591864 root 20 0 128.2g 44032 32768 S 0.0 0.0 0:10.00 reactor_1 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:55.823 06:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:56.394 06:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:56.394 06:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:56.394 06:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:56.394 06:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:56.394 06:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
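The connect step above pairs nvme connect with a serial poll: after the fabric connect returns, the kernel can take a moment to expose the new namespace as a block device, so the helper loops on lsblk until a device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears, sleeping 2 s between bounded retries. A rough equivalent of that waitforserial pattern, under the same assumptions:

waitforserial_sketch() {
    # Poll lsblk until $expected devices report the given serial, or time out.
    local serial=$1 expected=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        sleep 2   # give the initiator time to enumerate the new namespace
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0
    done
    echo "timed out waiting for serial $serial" >&2
    return 1
}

Usage mirroring the log: waitforserial_sketch SPDKISFASTANDAWESOME 1.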
00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:58.308 06:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 591860 0 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 0 idle 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:58.309 06:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591860 root 20 0 128.2g 78848 32768 S 0.0 0.1 0:20.58 reactor_0' 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591860 root 20 0 128.2g 78848 32768 S 0.0 0.1 0:20.58 reactor_0 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 591860 1 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 591860 1 idle 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=591860 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 591860 -w 256 00:35:58.571 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 591864 root 20 0 128.2g 78848 32768 S 0.0 0.1 0:10.07 reactor_1' 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 591864 root 20 0 128.2g 78848 32768 S 0.0 0.1 0:10.07 reactor_1 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:58.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:58.833 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.094 rmmod nvme_tcp 00:35:59.094 rmmod nvme_fabrics 00:35:59.094 rmmod nvme_keyring 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:59.094 06:34:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 591860 ']' 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 591860 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 591860 ']' 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 591860 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591860 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591860' 00:35:59.094 killing process with pid 591860 00:35:59.094 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 591860 00:35:59.095 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 591860 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:59.356 06:34:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.272 06:34:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:01.272 00:36:01.272 real 0m24.578s 00:36:01.272 user 0m39.569s 00:36:01.272 sys 0m9.750s 00:36:01.272 06:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.272 06:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.272 ************************************ 00:36:01.272 END TEST nvmf_interrupt 00:36:01.272 ************************************ 00:36:01.272 00:36:01.272 real 29m51.511s 00:36:01.272 user 60m5.636s 00:36:01.272 sys 9m59.153s 00:36:01.272 06:34:55 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.272 06:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.272 ************************************ 00:36:01.272 END TEST nvmf_tcp 00:36:01.272 ************************************ 00:36:01.272 06:34:55 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:01.272 06:34:55 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:01.272 06:34:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:01.272 06:34:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.272 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:36:01.533 ************************************ 00:36:01.533 START TEST spdkcli_nvmf_tcp 00:36:01.533 ************************************ 00:36:01.533 06:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:01.533 * Looking for test storage... 00:36:01.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.533 --rc genhtml_branch_coverage=1 00:36:01.533 --rc genhtml_function_coverage=1 00:36:01.533 --rc genhtml_legend=1 00:36:01.533 --rc geninfo_all_blocks=1 00:36:01.533 --rc geninfo_unexecuted_blocks=1 00:36:01.533 00:36:01.533 ' 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.533 --rc genhtml_branch_coverage=1 00:36:01.533 --rc genhtml_function_coverage=1 00:36:01.533 --rc genhtml_legend=1 00:36:01.533 --rc geninfo_all_blocks=1 00:36:01.533 --rc geninfo_unexecuted_blocks=1 00:36:01.533 00:36:01.533 ' 00:36:01.533 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.533 --rc genhtml_branch_coverage=1 00:36:01.533 --rc genhtml_function_coverage=1 00:36:01.533 --rc genhtml_legend=1 00:36:01.533 --rc geninfo_all_blocks=1 00:36:01.533 --rc geninfo_unexecuted_blocks=1 00:36:01.533 00:36:01.533 ' 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.534 --rc genhtml_branch_coverage=1 00:36:01.534 --rc genhtml_function_coverage=1 00:36:01.534 --rc genhtml_legend=1 00:36:01.534 --rc geninfo_all_blocks=1 00:36:01.534 --rc geninfo_unexecuted_blocks=1 00:36:01.534 00:36:01.534 ' 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:01.534 
06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.534 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:01.796 06:34:56 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=594951 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 594951 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 594951 ']' 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.796 06:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.796 [2024-12-09 06:34:56.201643] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
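run_nvmf_tgt (spdkcli/common.sh@32-34 above) backgrounds build/bin/nvmf_tgt with core mask 0x3, and waitforlisten then polls the RPC socket until the app answers; the "Waiting for process to start up and listen on UNIX domain socket..." line is that loop's banner. A rough standalone equivalent of the launch-and-wait pattern, with the retry count and sleep interval as assumptions rather than the exact autotest_common.sh values:

    # Launch the target, then poll its UNIX-domain RPC socket.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    for (( i = 0; i < 100; i++ )); do
        # rpc_get_methods only succeeds once /var/tmp/spdk.sock is served
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        kill -0 "$nvmf_tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done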
00:36:01.796 [2024-12-09 06:34:56.201711] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594951 ] 00:36:01.796 [2024-12-09 06:34:56.291803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:01.796 [2024-12-09 06:34:56.344980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.796 [2024-12-09 06:34:56.344987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.741 06:34:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:02.741 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:02.741 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:02.741 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:02.741 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:02.741 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:02.741 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:02.741 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:02.741 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:02.741 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:02.741 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:02.741 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:02.741 ' 00:36:05.285 [2024-12-09 06:34:59.598490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.671 [2024-12-09 06:35:00.834414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:08.586 [2024-12-09 06:35:03.108738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:10.501 [2024-12-09 06:35:05.062299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:12.412 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:12.412 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:12.413 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:12.413 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:12.413 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:12.413 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:12.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:12.413 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:12.413 06:35:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.674 
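check_match (spdkcli/common.sh@44-46 above) snapshots the live spdkcli tree and compares it against a stored pattern file, then removes the snapshot. Written out with the paths from this run (the match binary's pattern syntax is SPDK's own):

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
    ./scripts/spdkcli.py ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
    ./test/app/match/match "$testdir/match_files/spdkcli_nvmf.test.match"   # non-zero exit fails the test
    rm -f "$testdir/match_files/spdkcli_nvmf.test"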
06:35:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.674 06:35:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:12.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:12.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:12.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:12.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:12.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:12.674 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:12.674 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:12.674 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:12.674 ' 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:17.975 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:17.975 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:17.975 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:17.975 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:17.975 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:17.975 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:17.975 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:17.976 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:17.976 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.976 
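The clear pass above drives deletions through spdkcli in reverse creation order: namespaces and hosts first, then listeners, then subsystems, and finally the malloc bdevs. The same teardown expressed through the underlying RPCs would look roughly like the sketch below; these are rpc.py equivalents for illustration, not what the test literally invokes (the test goes through spdkcli_job.py):

    # Reverse-of-creation teardown, sketched with rpc.py equivalents.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
    ./scripts/rpc.py bdev_malloc_delete Malloc6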
06:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 594951 ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594951' 00:36:17.976 killing process with pid 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 594951 ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 594951 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 594951 ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 594951 00:36:17.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (594951) - No such process 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 594951 is not found' 00:36:17.976 Process with pid 594951 is not found 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:17.976 00:36:17.976 real 0m16.656s 00:36:17.976 user 0m35.203s 00:36:17.976 sys 0m0.885s 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.976 06:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.976 ************************************ 00:36:17.976 END TEST spdkcli_nvmf_tcp 00:36:17.976 ************************************ 00:36:18.238 06:35:12 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:18.238 06:35:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:18.238 06:35:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:18.238 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:36:18.238 ************************************ 00:36:18.238 START TEST nvmf_identify_passthru 00:36:18.238 ************************************ 00:36:18.238 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:18.238 * Looking for test storage... 
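In the shutdown sequence above, the target is killed twice: killprocess at nvmf.sh@90 does the real kill-and-wait, and the EXIT-trap cleanup then retries against the already-dead pid, producing the harmless "No such process" / "Process with pid 594951 is not found" lines. A sketch of that tolerant pattern (simplified; the real killprocess also checks `uname` and refuses to kill a process whose comm is sudo):

    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }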
00:36:18.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:18.238 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:18.238 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:36:18.238 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.239 06:35:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.239 --rc genhtml_branch_coverage=1 00:36:18.239 --rc genhtml_function_coverage=1 00:36:18.239 --rc genhtml_legend=1 00:36:18.239 --rc geninfo_all_blocks=1 00:36:18.239 --rc geninfo_unexecuted_blocks=1 00:36:18.239 00:36:18.239 ' 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.239 --rc genhtml_branch_coverage=1 00:36:18.239 --rc genhtml_function_coverage=1 00:36:18.239 --rc genhtml_legend=1 00:36:18.239 --rc geninfo_all_blocks=1 00:36:18.239 --rc geninfo_unexecuted_blocks=1 00:36:18.239 00:36:18.239 ' 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.239 --rc genhtml_branch_coverage=1 00:36:18.239 --rc genhtml_function_coverage=1 00:36:18.239 --rc genhtml_legend=1 00:36:18.239 --rc geninfo_all_blocks=1 00:36:18.239 --rc geninfo_unexecuted_blocks=1 00:36:18.239 00:36:18.239 ' 00:36:18.239 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.239 --rc genhtml_branch_coverage=1 00:36:18.239 --rc genhtml_function_coverage=1 00:36:18.239 --rc genhtml_legend=1 00:36:18.239 --rc geninfo_all_blocks=1 00:36:18.239 --rc geninfo_unexecuted_blocks=1 00:36:18.239 00:36:18.239 ' 00:36:18.239 06:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.239 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.501 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.501 06:35:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.501 06:35:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.501 06:35:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.501 06:35:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.501 06:35:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.501 06:35:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.501 06:35:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.501 06:35:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:18.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.502 06:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.502 06:35:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.502 06:35:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.502 06:35:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.502 06:35:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:18.502 06:35:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.502 06:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.502 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:18.502 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:18.502 06:35:12 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:18.502 06:35:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:26.645 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:26.646 06:35:19 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:26.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:26.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:26.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:26.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:26.646 06:35:19 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.646 06:35:19 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:26.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:36:26.646 00:36:26.646 --- 10.0.0.2 ping statistics --- 00:36:26.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.646 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:36:26.646 00:36:26.646 --- 10.0.0.1 ping statistics --- 00:36:26.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.646 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:26.646 06:35:20 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:26.646 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.646 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:26.646 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:26.647 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:26.647 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:26.647 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:26.647 06:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:26.647 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:26.647 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:26.647 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:26.647 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:26.647 06:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:30.857 06:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ9512038S2P0BGN 00:36:30.857 06:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:30.857 06:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:30.857 06:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=602951 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:36.141 06:35:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 602951 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 602951 ']' 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.141 06:35:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.141 [2024-12-09 06:35:30.572009] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:36:36.141 [2024-12-09 06:35:30.572064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.141 [2024-12-09 06:35:30.660776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:36.141 [2024-12-09 06:35:30.693783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:36.141 [2024-12-09 06:35:30.693815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
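get_first_nvme_bdf above walks scripts/gen_nvme.sh | jq for local controllers, and the identify probes then scrape the drive's serial and model with grep/awk. As one self-contained snippet, using the traddr discovered in this run:

    bdf=0000:65:00.0
    identify=./build/bin/spdk_nvme_identify
    serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "local drive: serial=$serial model=$model"

These two values (PHLJ9512038S2P0BGN / INTEL here) become the reference that the fabric-side identify must reproduce later in the test.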
00:36:36.141 [2024-12-09 06:35:30.693821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.141 [2024-12-09 06:35:30.693826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.141 [2024-12-09 06:35:30.693830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:36.141 [2024-12-09 06:35:30.695063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.141 [2024-12-09 06:35:30.695209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:36.141 [2024-12-09 06:35:30.695317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.141 [2024-12-09 06:35:30.695319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:37.083 06:35:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.083 INFO: Log level set to 20 00:36:37.083 INFO: Requests: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "method": "nvmf_set_config", 00:36:37.083 "id": 1, 00:36:37.083 "params": { 00:36:37.083 "admin_cmd_passthru": { 00:36:37.083 "identify_ctrlr": true 00:36:37.083 } 00:36:37.083 } 00:36:37.083 } 00:36:37.083 00:36:37.083 INFO: response: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "id": 1, 00:36:37.083 "result": true 00:36:37.083 } 00:36:37.083 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.083 06:35:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.083 INFO: Setting log level to 20 00:36:37.083 INFO: Setting log level to 20 00:36:37.083 INFO: Log level set to 20 00:36:37.083 INFO: Log level set to 20 00:36:37.083 INFO: Requests: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "method": "framework_start_init", 00:36:37.083 "id": 1 00:36:37.083 } 00:36:37.083 00:36:37.083 INFO: Requests: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "method": "framework_start_init", 00:36:37.083 "id": 1 00:36:37.083 } 00:36:37.083 00:36:37.083 [2024-12-09 06:35:31.450486] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:37.083 INFO: response: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "id": 1, 00:36:37.083 "result": true 00:36:37.083 } 00:36:37.083 00:36:37.083 INFO: response: 00:36:37.083 { 00:36:37.083 "jsonrpc": "2.0", 00:36:37.083 "id": 1, 00:36:37.083 "result": true 00:36:37.083 } 00:36:37.083 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.083 06:35:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.083 06:35:31 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:37.083 INFO: Setting log level to 40 00:36:37.083 INFO: Setting log level to 40 00:36:37.083 INFO: Setting log level to 40 00:36:37.083 [2024-12-09 06:35:31.463525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.083 06:35:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:37.083 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.084 06:35:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:37.084 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.084 06:35:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 Nvme0n1 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 [2024-12-09 06:35:34.370069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 [ 00:36:40.384 { 00:36:40.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:40.384 "subtype": "Discovery", 00:36:40.384 "listen_addresses": [], 00:36:40.384 "allow_any_host": true, 00:36:40.384 "hosts": [] 00:36:40.384 }, 00:36:40.384 { 00:36:40.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.384 "subtype": "NVMe", 00:36:40.384 "listen_addresses": [ 00:36:40.384 { 00:36:40.384 "trtype": "TCP", 00:36:40.384 "adrfam": "IPv4", 00:36:40.384 "traddr": "10.0.0.2", 00:36:40.384 "trsvcid": "4420" 00:36:40.384 } 00:36:40.384 ], 00:36:40.384 "allow_any_host": true, 00:36:40.384 "hosts": [], 00:36:40.384 "serial_number": 
"SPDK00000000000001", 00:36:40.384 "model_number": "SPDK bdev Controller", 00:36:40.384 "max_namespaces": 1, 00:36:40.384 "min_cntlid": 1, 00:36:40.384 "max_cntlid": 65519, 00:36:40.384 "namespaces": [ 00:36:40.384 { 00:36:40.384 "nsid": 1, 00:36:40.384 "bdev_name": "Nvme0n1", 00:36:40.384 "name": "Nvme0n1", 00:36:40.384 "nguid": "0C73175B563041C489F286E4972C1A20", 00:36:40.384 "uuid": "0c73175b-5630-41c4-89f2-86e4972c1a20" 00:36:40.384 } 00:36:40.384 ] 00:36:40.384 } 00:36:40.384 ] 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512038S2P0BGN 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9512038S2P0BGN '!=' PHLJ9512038S2P0BGN ']' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:40.384 06:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:40.384 rmmod nvme_tcp 00:36:40.384 rmmod nvme_fabrics 00:36:40.384 rmmod nvme_keyring 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 602951 ']' 00:36:40.384 06:35:34 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 602951 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 602951 ']' 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 602951 00:36:40.384 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602951 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602951' 00:36:40.385 killing process with pid 602951 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 602951 00:36:40.385 06:35:34 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 602951 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:42.932 06:35:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.932 06:35:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:42.932 06:35:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.842 06:35:39 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:44.842 00:36:44.842 real 0m26.786s 00:36:44.842 user 0m35.467s 00:36:44.842 sys 0m7.425s 00:36:44.842 06:35:39 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.842 06:35:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.842 ************************************ 00:36:44.842 END TEST nvmf_identify_passthru 00:36:44.842 ************************************ 00:36:45.102 06:35:39 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.102 06:35:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:45.102 06:35:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.102 06:35:39 -- common/autotest_common.sh@10 -- # set +x 00:36:45.102 ************************************ 00:36:45.102 START TEST nvmf_dif 00:36:45.102 ************************************ 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.102 * Looking for test storage... 
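An aside on the identify_passthru test that just finished above: it reduces to one check — identity fields read directly over PCIe must survive the trip through the TCP subsystem unchanged. A minimal sketch of that comparison, assuming the SPDK build tree and the 10.0.0.2:4420 listener from this run (not the test script itself; paths shortened):

# Serial number straight from the local PCIe controller.
pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 \
            | grep 'Serial Number:' | awk '{print $3}')

# The same field read back through the subsystem, with the target started
# using nvmf_set_config --passthru-identify-ctrlr as in the log above.
tcp_sn=$(./build/bin/spdk_nvme_identify \
           -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
           | grep 'Serial Number:' | awk '{print $3}')

# Passthru works if the fabric-side identity matches the PCIe one; the run
# above compared PHLJ9512038S2P0BGN against itself and passed.
[ "$pcie_sn" = "$tcp_sn" ] || { echo 'identify passthru mismatch' >&2; exit 1; }

The model-number check at target/identify_passthru.sh@68 follows the same pattern with grep 'Model Number:'.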
00:36:45.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.102 06:35:39 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.102 --rc genhtml_branch_coverage=1 00:36:45.102 --rc genhtml_function_coverage=1 00:36:45.102 --rc genhtml_legend=1 00:36:45.102 --rc geninfo_all_blocks=1 00:36:45.102 --rc geninfo_unexecuted_blocks=1 00:36:45.102 00:36:45.102 ' 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.102 --rc genhtml_branch_coverage=1 00:36:45.102 --rc genhtml_function_coverage=1 00:36:45.102 --rc genhtml_legend=1 00:36:45.102 --rc geninfo_all_blocks=1 00:36:45.102 --rc geninfo_unexecuted_blocks=1 00:36:45.102 00:36:45.102 ' 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:36:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.102 --rc genhtml_branch_coverage=1 00:36:45.102 --rc genhtml_function_coverage=1 00:36:45.102 --rc genhtml_legend=1 00:36:45.102 --rc geninfo_all_blocks=1 00:36:45.102 --rc geninfo_unexecuted_blocks=1 00:36:45.102 00:36:45.102 ' 00:36:45.102 06:35:39 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.102 --rc genhtml_branch_coverage=1 00:36:45.102 --rc genhtml_function_coverage=1 00:36:45.102 --rc genhtml_legend=1 00:36:45.102 --rc geninfo_all_blocks=1 00:36:45.102 --rc geninfo_unexecuted_blocks=1 00:36:45.102 00:36:45.102 ' 00:36:45.102 06:35:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.102 06:35:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.362 06:35:39 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.362 06:35:39 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.362 06:35:39 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.362 06:35:39 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.362 06:35:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.362 06:35:39 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.362 06:35:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.362 06:35:39 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:45.362 06:35:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:45.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:45.362 06:35:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:45.362 06:35:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:45.362 06:35:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:45.362 06:35:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:45.362 06:35:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.362 06:35:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:45.362 06:35:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:45.362 06:35:39 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:45.362 06:35:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:53.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.500 
06:35:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:53.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:53.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:53.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:53.500 06:35:46 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.500 06:35:47 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.500 06:35:47 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.500 06:35:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:53.500 06:35:47 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:53.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:36:53.500 00:36:53.500 --- 10.0.0.2 ping statistics --- 00:36:53.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.500 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:36:53.500 06:35:47 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:53.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:36:53.501 00:36:53.501 --- 10.0.0.1 ping statistics --- 00:36:53.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.501 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:36:53.501 06:35:47 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.501 06:35:47 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:53.501 06:35:47 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:53.501 06:35:47 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:56.046 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:56.046 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:56.046 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:56.306 06:35:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:56.306 06:35:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=609312 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 609312 00:36:56.306 06:35:50 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 609312 ']' 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:56.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:56.306 06:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.566 [2024-12-09 06:35:50.906717] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:36:56.566 [2024-12-09 06:35:50.906834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.566 [2024-12-09 06:35:51.006226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.566 [2024-12-09 06:35:51.056390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:56.566 [2024-12-09 06:35:51.056440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:56.567 [2024-12-09 06:35:51.056455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.567 [2024-12-09 06:35:51.056462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.567 [2024-12-09 06:35:51.056469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:56.567 [2024-12-09 06:35:51.057011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:57.509 06:35:51 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 06:35:51 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.509 06:35:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:57.509 06:35:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 [2024-12-09 06:35:51.782769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.509 06:35:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 ************************************ 00:36:57.509 START TEST fio_dif_1_default 00:36:57.509 ************************************ 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 bdev_null0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.509 [2024-12-09 06:35:51.871234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:57.509 { 00:36:57.509 "params": { 00:36:57.509 "name": "Nvme$subsystem", 00:36:57.509 "trtype": "$TEST_TRANSPORT", 00:36:57.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:57.509 "adrfam": "ipv4", 00:36:57.509 "trsvcid": "$NVMF_PORT", 00:36:57.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:57.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:57.509 "hdgst": ${hdgst:-false}, 00:36:57.509 
"ddgst": ${ddgst:-false} 00:36:57.509 }, 00:36:57.509 "method": "bdev_nvme_attach_controller" 00:36:57.509 } 00:36:57.509 EOF 00:36:57.509 )") 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:57.509 "params": { 00:36:57.509 "name": "Nvme0", 00:36:57.509 "trtype": "tcp", 00:36:57.509 "traddr": "10.0.0.2", 00:36:57.509 "adrfam": "ipv4", 00:36:57.509 "trsvcid": "4420", 00:36:57.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.509 "hdgst": false, 00:36:57.509 "ddgst": false 00:36:57.509 }, 00:36:57.509 "method": "bdev_nvme_attach_controller" 00:36:57.509 }' 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:57.509 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:57.510 06:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.774 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:57.774 fio-3.35 00:36:57.774 Starting 1 thread 00:37:10.012 00:37:10.012 filename0: (groupid=0, jobs=1): err= 0: pid=609798: Mon Dec 9 06:36:02 2024 00:37:10.012 read: IOPS=190, BW=762KiB/s (780kB/s)(7648KiB/10041msec) 00:37:10.012 slat (nsec): min=5670, max=36076, avg=6257.91, stdev=1197.56 00:37:10.012 clat (usec): min=495, max=42630, avg=20988.50, stdev=20167.55 00:37:10.012 lat (usec): min=501, max=42666, avg=20994.76, stdev=20167.54 00:37:10.012 clat percentiles (usec): 00:37:10.012 | 1.00th=[ 611], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 816], 00:37:10.012 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 988], 60.00th=[41157], 00:37:10.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:10.012 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:37:10.012 | 99.99th=[42730] 00:37:10.012 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=763.20, stdev=15.66, samples=20 00:37:10.012 iops : min= 176, max= 192, avg=190.80, stdev= 3.91, samples=20 00:37:10.012 lat (usec) : 500=0.05%, 750=2.77%, 1000=47.18% 00:37:10.012 lat (msec) : 50=50.00% 00:37:10.012 cpu : usr=92.99%, sys=6.78%, ctx=7, majf=0, minf=219 00:37:10.012 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.012 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.012 latency : target=0, window=0, percentile=100.00%, depth=4 
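Before the run status and teardown below, a recap of the target-side sequence this fio_dif_1_default run exercised, reduced to plain rpc.py calls (a sketch: rpc_cmd in the log wraps the same script, and the default /var/tmp/spdk.sock socket is assumed):

# TCP transport with target-side DIF insert/strip enabled.
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

# Null bdev: 64 MB, 512-byte blocks, 16 bytes of metadata per block, DIF type 1.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Export it through one subsystem and listen on the namespaced interface.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420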
00:37:10.012 00:37:10.012 Run status group 0 (all jobs): 00:37:10.012 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7648KiB (7832kB), run=10041-10041msec 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 00:37:10.012 real 0m11.275s 00:37:10.012 user 0m16.206s 00:37:10.012 sys 0m1.048s 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 ************************************ 00:37:10.012 END TEST fio_dif_1_default 00:37:10.012 ************************************ 00:37:10.012 06:36:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:10.012 06:36:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:10.012 06:36:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 ************************************ 00:37:10.012 START TEST fio_dif_1_multi_subsystems 00:37:10.012 ************************************ 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 bdev_null0 00:37:10.012 06:36:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 [2024-12-09 06:36:03.220856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 bdev_null1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.012 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.013 { 00:37:10.013 "params": { 00:37:10.013 "name": "Nvme$subsystem", 00:37:10.013 "trtype": "$TEST_TRANSPORT", 00:37:10.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.013 "adrfam": "ipv4", 00:37:10.013 "trsvcid": "$NVMF_PORT", 00:37:10.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.013 "hdgst": ${hdgst:-false}, 00:37:10.013 "ddgst": ${ddgst:-false} 00:37:10.013 }, 00:37:10.013 "method": "bdev_nvme_attach_controller" 00:37:10.013 } 00:37:10.013 EOF 00:37:10.013 )") 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.013 { 00:37:10.013 "params": { 00:37:10.013 "name": "Nvme$subsystem", 00:37:10.013 "trtype": "$TEST_TRANSPORT", 00:37:10.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.013 "adrfam": "ipv4", 00:37:10.013 "trsvcid": "$NVMF_PORT", 00:37:10.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.013 "hdgst": ${hdgst:-false}, 00:37:10.013 "ddgst": ${ddgst:-false} 00:37:10.013 }, 00:37:10.013 "method": "bdev_nvme_attach_controller" 00:37:10.013 } 00:37:10.013 EOF 00:37:10.013 )") 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:10.013 "params": { 00:37:10.013 "name": "Nvme0", 00:37:10.013 "trtype": "tcp", 00:37:10.013 "traddr": "10.0.0.2", 00:37:10.013 "adrfam": "ipv4", 00:37:10.013 "trsvcid": "4420", 00:37:10.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.013 "hdgst": false, 00:37:10.013 "ddgst": false 00:37:10.013 }, 00:37:10.013 "method": "bdev_nvme_attach_controller" 00:37:10.013 },{ 00:37:10.013 "params": { 00:37:10.013 "name": "Nvme1", 00:37:10.013 "trtype": "tcp", 00:37:10.013 "traddr": "10.0.0.2", 00:37:10.013 "adrfam": "ipv4", 00:37:10.013 "trsvcid": "4420", 00:37:10.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.013 "hdgst": false, 00:37:10.013 "ddgst": false 00:37:10.013 }, 00:37:10.013 "method": "bdev_nvme_attach_controller" 00:37:10.013 }' 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:10.013 06:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.013 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:10.013 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:10.013 fio-3.35 00:37:10.013 Starting 2 threads 00:37:20.020 00:37:20.020 filename0: (groupid=0, jobs=1): err= 0: pid=611909: Mon Dec 9 06:36:14 2024 00:37:20.020 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10039msec) 00:37:20.020 slat (nsec): min=5656, max=32015, avg=6297.77, stdev=1119.77 00:37:20.020 clat (usec): min=393, max=42507, avg=21028.15, stdev=20179.21 00:37:20.020 lat (usec): min=399, max=42539, avg=21034.45, stdev=20179.19 00:37:20.020 clat percentiles (usec): 00:37:20.020 | 1.00th=[ 586], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 824], 00:37:20.020 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[41157], 60.00th=[41157], 00:37:20.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:20.020 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:37:20.020 | 99.99th=[42730] 00:37:20.020 bw ( KiB/s): min= 704, max= 768, per=49.89%, avg=761.60, stdev=19.70, samples=20 00:37:20.020 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:37:20.020 lat (usec) : 500=0.21%, 750=3.67%, 1000=46.02% 00:37:20.020 lat (msec) : 50=50.10% 00:37:20.020 cpu : usr=95.54%, sys=4.25%, ctx=9, majf=0, minf=162 00:37:20.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.020 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.020 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:20.020 filename1: (groupid=0, jobs=1): err= 0: pid=611911: Mon Dec 9 06:36:14 2024 00:37:20.020 read: IOPS=191, BW=768KiB/s (786kB/s)(7680KiB/10001msec) 00:37:20.020 slat (nsec): min=5662, max=32608, avg=6498.50, stdev=1649.82 00:37:20.020 clat (usec): min=631, max=41355, avg=20816.45, stdev=20176.49 00:37:20.020 lat (usec): min=639, max=41363, avg=20822.95, stdev=20176.36 00:37:20.020 clat percentiles (usec): 00:37:20.020 | 1.00th=[ 693], 5.00th=[ 758], 10.00th=[ 783], 20.00th=[ 807], 00:37:20.020 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 1004], 60.00th=[41157], 00:37:20.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:20.020 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:20.020 | 99.99th=[41157] 00:37:20.020 bw ( KiB/s): min= 736, max= 832, per=50.29%, avg=768.00, stdev=18.48, samples=19 00:37:20.020 iops : min= 184, max= 208, avg=192.00, stdev= 4.62, samples=19 00:37:20.020 lat (usec) : 750=4.53%, 1000=45.42% 00:37:20.020 lat (msec) : 2=0.47%, 50=49.58% 00:37:20.020 cpu : usr=95.23%, sys=4.56%, ctx=12, majf=0, minf=105 00:37:20.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.020 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.020 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.020 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:20.020 00:37:20.020 Run status group 0 (all jobs): 00:37:20.020 READ: bw=1525KiB/s (1562kB/s), 760KiB/s-768KiB/s (778kB/s-786kB/s), io=15.0MiB (15.7MB), run=10001-10039msec 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 00:37:20.281 real 0m11.516s 00:37:20.281 user 0m31.090s 00:37:20.281 sys 0m1.210s 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 ************************************ 00:37:20.281 END TEST fio_dif_1_multi_subsystems 00:37:20.281 ************************************ 
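The pass that just finished exercises DIF type 1 end to end: each of the two subsystems wraps a null bdev (64 MB with 512-byte blocks per the RPC arguments, plus 16 bytes of per-block metadata), both listen on the same TCP portal, and fio reads through them concurrently via the spdk_bdev ioengine before everything is deleted. A minimal standalone sketch of the same provisioning and teardown sequence, issued directly with scripts/rpc.py against an already-running nvmf_tgt — this assumes the harness's rpc_cmd wrapper reduces to scripts/rpc.py and that the TCP transport has already been created:

    # one null bdev + one subsystem per index, mirroring the RPCs in the trace
    rpc=scripts/rpc.py
    for i in 0 1; do
        # 512-byte blocks with 16-byte metadata, protection information type 1
        $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        # both subsystems share one portal; they are told apart by subsystem NQN
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # ... the fio run happens here (see the spdk_bdev invocation in the trace) ...
    for i in 0 1; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "bdev_null$i"
    done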
00:37:20.281 06:36:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:20.281 06:36:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:20.281 06:36:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 ************************************ 00:37:20.281 START TEST fio_dif_rand_params 00:37:20.281 ************************************ 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 bdev_null0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.281 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.282 [2024-12-09 06:36:14.815845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:20.282 { 00:37:20.282 "params": { 00:37:20.282 "name": "Nvme$subsystem", 00:37:20.282 "trtype": "$TEST_TRANSPORT", 00:37:20.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.282 "adrfam": "ipv4", 00:37:20.282 "trsvcid": "$NVMF_PORT", 00:37:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.282 "hdgst": ${hdgst:-false}, 00:37:20.282 "ddgst": ${ddgst:-false} 00:37:20.282 }, 00:37:20.282 "method": "bdev_nvme_attach_controller" 00:37:20.282 } 00:37:20.282 EOF 00:37:20.282 )") 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.282 06:36:14 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:20.282 "params": { 00:37:20.282 "name": "Nvme0", 00:37:20.282 "trtype": "tcp", 00:37:20.282 "traddr": "10.0.0.2", 00:37:20.282 "adrfam": "ipv4", 00:37:20.282 "trsvcid": "4420", 00:37:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:20.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:20.282 "hdgst": false, 00:37:20.282 "ddgst": false 00:37:20.282 }, 00:37:20.282 "method": "bdev_nvme_attach_controller" 00:37:20.282 }' 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.282 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:20.555 06:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.817 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:20.817 ... 
00:37:20.817 fio-3.35 00:37:20.817 Starting 3 threads 00:37:27.400 00:37:27.400 filename0: (groupid=0, jobs=1): err= 0: pid=614342: Mon Dec 9 06:36:20 2024 00:37:27.400 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(197MiB/5006msec) 00:37:27.400 slat (nsec): min=5751, max=31815, avg=6441.03, stdev=891.11 00:37:27.400 clat (usec): min=5044, max=89529, avg=9512.71, stdev=7363.61 00:37:27.400 lat (usec): min=5050, max=89534, avg=9519.16, stdev=7363.72 00:37:27.400 clat percentiles (usec): 00:37:27.400 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7504], 00:37:27.400 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8848], 00:37:27.400 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945], 00:37:27.400 | 99.00th=[48497], 99.50th=[52691], 99.90th=[89654], 99.95th=[89654], 00:37:27.400 | 99.99th=[89654] 00:37:27.400 bw ( KiB/s): min=25344, max=46848, per=38.79%, avg=40320.00, stdev=6947.46, samples=10 00:37:27.400 iops : min= 198, max= 366, avg=315.00, stdev=54.28, samples=10 00:37:27.400 lat (msec) : 10=80.98%, 20=16.99%, 50=1.40%, 100=0.63% 00:37:27.400 cpu : usr=94.29%, sys=5.49%, ctx=5, majf=0, minf=117 00:37:27.400 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 issued rwts: total=1577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.400 filename0: (groupid=0, jobs=1): err= 0: pid=614343: Mon Dec 9 06:36:20 2024 00:37:27.400 read: IOPS=322, BW=40.3MiB/s (42.3MB/s)(203MiB/5045msec) 00:37:27.400 slat (nsec): min=6005, max=30432, avg=8074.89, stdev=912.62 00:37:27.400 clat (usec): min=4385, max=51189, avg=9265.71, stdev=5127.25 00:37:27.400 lat (usec): min=4393, max=51198, avg=9273.79, stdev=5127.36 00:37:27.400 clat percentiles (usec): 00:37:27.400 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7439], 00:37:27.400 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:37:27.400 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:37:27.400 | 99.00th=[46400], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:37:27.400 | 99.99th=[51119] 00:37:27.400 bw ( KiB/s): min=25344, max=46592, per=40.02%, avg=41600.00, stdev=6167.95, samples=10 00:37:27.400 iops : min= 198, max= 364, avg=325.00, stdev=48.19, samples=10 00:37:27.400 lat (msec) : 10=75.54%, 20=22.86%, 50=1.29%, 100=0.31% 00:37:27.400 cpu : usr=94.90%, sys=4.86%, ctx=11, majf=0, minf=100 00:37:27.400 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 issued rwts: total=1627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.400 filename0: (groupid=0, jobs=1): err= 0: pid=614344: Mon Dec 9 06:36:20 2024 00:37:27.400 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(112MiB/5023msec) 00:37:27.400 slat (nsec): min=5741, max=32445, avg=7947.08, stdev=1341.41 00:37:27.400 clat (msec): min=5, max=132, avg=16.86, stdev=19.41 00:37:27.400 lat (msec): min=5, max=132, avg=16.87, stdev=19.41 00:37:27.400 clat percentiles (msec): 00:37:27.400 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:37:27.400 | 30.00th=[ 9], 
40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:37:27.400 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 50], 95.00th=[ 52], 00:37:27.400 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 133], 99.95th=[ 133], 00:37:27.400 | 99.99th=[ 133] 00:37:27.400 bw ( KiB/s): min=14592, max=26880, per=21.92%, avg=22784.00, stdev=4213.43, samples=10 00:37:27.400 iops : min= 114, max= 210, avg=178.00, stdev=32.92, samples=10 00:37:27.400 lat (msec) : 10=70.44%, 20=12.77%, 50=7.95%, 100=8.62%, 250=0.22% 00:37:27.400 cpu : usr=95.14%, sys=4.62%, ctx=7, majf=0, minf=65 00:37:27.400 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.400 issued rwts: total=893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.400 00:37:27.400 Run status group 0 (all jobs): 00:37:27.400 READ: bw=102MiB/s (106MB/s), 22.2MiB/s-40.3MiB/s (23.3MB/s-42.3MB/s), io=512MiB (537MB), run=5006-5045msec 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 bdev_null0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 [2024-12-09 06:36:20.913101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 bdev_null1 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 
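Each fio pass in this trace is launched the same way: gen_nvmf_target_json prints one bdev_nvme_attach_controller fragment per subsystem onto /dev/fd/62, gen_fio_conf writes the job file onto /dev/fd/61, and fio is started with the spdk_bdev plugin in LD_PRELOAD. An equivalent standalone invocation might look like the sketch below; note the "subsystems" envelope around the fragment and the filename=Nvme0n1 job entry are assumptions of this sketch — the trace only shows the per-controller fragments and the fio command line:

    # subsys.json: one attach_controller entry per exported subsystem,
    # wrapped in SPDK's standard JSON-config envelope (assumed, see above)
    cat > subsys.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    # preload the fio plugin built by SPDK, then point fio at the JSON config
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf subsys.json jobs.fio
    # jobs.fio would address the attached namespace by bdev name,
    # e.g. filename=Nvme0n1 (hypothetical job file; the harness generates
    # its own on /dev/fd/61)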
06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 bdev_null2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:27.400 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:27.400 { 00:37:27.400 "params": { 00:37:27.400 "name": "Nvme$subsystem", 00:37:27.400 "trtype": "$TEST_TRANSPORT", 00:37:27.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.400 "adrfam": "ipv4", 00:37:27.400 "trsvcid": "$NVMF_PORT", 00:37:27.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.400 "hdgst": ${hdgst:-false}, 00:37:27.400 "ddgst": ${ddgst:-false} 00:37:27.400 }, 00:37:27.400 "method": "bdev_nvme_attach_controller" 00:37:27.400 } 00:37:27.400 EOF 00:37:27.401 )") 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:27.401 { 00:37:27.401 "params": { 00:37:27.401 "name": "Nvme$subsystem", 00:37:27.401 "trtype": "$TEST_TRANSPORT", 00:37:27.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.401 "adrfam": "ipv4", 00:37:27.401 "trsvcid": "$NVMF_PORT", 00:37:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.401 "hdgst": ${hdgst:-false}, 00:37:27.401 "ddgst": ${ddgst:-false} 00:37:27.401 }, 00:37:27.401 "method": "bdev_nvme_attach_controller" 00:37:27.401 } 00:37:27.401 EOF 00:37:27.401 )") 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:27.401 06:36:21 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:27.401 { 00:37:27.401 "params": { 00:37:27.401 "name": "Nvme$subsystem", 00:37:27.401 "trtype": "$TEST_TRANSPORT", 00:37:27.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.401 "adrfam": "ipv4", 00:37:27.401 "trsvcid": "$NVMF_PORT", 00:37:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.401 "hdgst": ${hdgst:-false}, 00:37:27.401 "ddgst": ${ddgst:-false} 00:37:27.401 }, 00:37:27.401 "method": "bdev_nvme_attach_controller" 00:37:27.401 } 00:37:27.401 EOF 00:37:27.401 )") 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:27.401 "params": { 00:37:27.401 "name": "Nvme0", 00:37:27.401 "trtype": "tcp", 00:37:27.401 "traddr": "10.0.0.2", 00:37:27.401 "adrfam": "ipv4", 00:37:27.401 "trsvcid": "4420", 00:37:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.401 "hdgst": false, 00:37:27.401 "ddgst": false 00:37:27.401 }, 00:37:27.401 "method": "bdev_nvme_attach_controller" 00:37:27.401 },{ 00:37:27.401 "params": { 00:37:27.401 "name": "Nvme1", 00:37:27.401 "trtype": "tcp", 00:37:27.401 "traddr": "10.0.0.2", 00:37:27.401 "adrfam": "ipv4", 00:37:27.401 "trsvcid": "4420", 00:37:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:27.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:27.401 "hdgst": false, 00:37:27.401 "ddgst": false 00:37:27.401 }, 00:37:27.401 "method": "bdev_nvme_attach_controller" 00:37:27.401 },{ 00:37:27.401 "params": { 00:37:27.401 "name": "Nvme2", 00:37:27.401 "trtype": "tcp", 00:37:27.401 "traddr": "10.0.0.2", 00:37:27.401 "adrfam": "ipv4", 00:37:27.401 "trsvcid": "4420", 00:37:27.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:27.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:27.401 "hdgst": false, 00:37:27.401 "ddgst": false 00:37:27.401 }, 00:37:27.401 "method": "bdev_nvme_attach_controller" 00:37:27.401 }' 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:27.401 06:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.401 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:27.401 ... 00:37:27.401 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:27.401 ... 00:37:27.401 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:27.401 ... 00:37:27.401 fio-3.35 00:37:27.401 Starting 24 threads 00:37:39.627 00:37:39.627 filename0: (groupid=0, jobs=1): err= 0: pid=615645: Mon Dec 9 06:36:32 2024 00:37:39.627 read: IOPS=712, BW=2849KiB/s (2918kB/s)(27.9MiB/10024msec) 00:37:39.627 slat (usec): min=3, max=1607, avg=13.82, stdev=22.40 00:37:39.627 clat (usec): min=2967, max=42001, avg=22366.02, stdev=5193.03 00:37:39.627 lat (usec): min=2975, max=42028, avg=22379.84, stdev=5195.99 00:37:39.627 clat percentiles (usec): 00:37:39.627 | 1.00th=[ 8586], 5.00th=[14746], 10.00th=[16057], 20.00th=[17695], 00:37:39.627 | 30.00th=[20055], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:39.627 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[31065], 00:37:39.627 | 99.00th=[38011], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:37:39.628 | 99.99th=[42206] 00:37:39.628 bw ( KiB/s): min= 2560, max= 3424, per=4.45%, avg=2848.95, stdev=196.92, samples=20 00:37:39.628 iops : min= 640, max= 856, avg=712.20, stdev=49.24, samples=20 00:37:39.628 lat (msec) : 4=0.41%, 10=1.16%, 20=27.73%, 50=70.70% 00:37:39.628 cpu : usr=98.73%, sys=0.98%, ctx=30, majf=0, minf=100 00:37:39.628 IO depths : 1=2.7%, 2=5.5%, 4=14.6%, 8=67.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=7140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615646: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=726, BW=2906KiB/s (2976kB/s)(28.4MiB/10015msec) 00:37:39.628 slat (nsec): min=5833, max=77132, avg=9837.14, stdev=8136.18 00:37:39.628 clat (usec): min=8131, max=39920, avg=21948.68, stdev=4755.47 00:37:39.628 lat (usec): min=8139, max=39952, avg=21958.52, stdev=4756.72 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[10159], 5.00th=[14877], 10.00th=[15926], 20.00th=[17695], 00:37:39.628 | 30.00th=[19268], 40.00th=[20841], 50.00th=[23725], 60.00th=[23987], 00:37:39.628 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26870], 95.00th=[29754], 00:37:39.628 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38536], 99.95th=[40109], 00:37:39.628 | 99.99th=[40109] 00:37:39.628 bw ( KiB/s): min= 2608, max= 3254, per=4.54%, avg=2908.30, stdev=174.57, samples=20 00:37:39.628 iops : min= 652, max= 813, avg=727.05, stdev=43.59, samples=20 00:37:39.628 lat (msec) : 10=0.88%, 20=33.69%, 50=65.43% 00:37:39.628 cpu : usr=98.77%, sys=0.83%, ctx=62, majf=0, minf=90 00:37:39.628 IO depths : 1=1.3%, 2=3.2%, 4=10.8%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=7276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615647: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=676, BW=2708KiB/s (2773kB/s)(26.5MiB/10021msec) 00:37:39.628 slat (nsec): min=3759, max=64582, avg=8269.76, stdev=5249.62 00:37:39.628 clat (usec): min=2034, max=25509, avg=23563.84, stdev=3175.05 00:37:39.628 lat (usec): min=2041, max=25515, avg=23572.11, stdev=3174.95 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[ 3621], 5.00th=[22676], 10.00th=[23725], 20.00th=[23987], 00:37:39.628 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.628 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.628 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:37:39.628 | 99.99th=[25560] 00:37:39.628 bw ( KiB/s): min= 2554, max= 3712, per=4.23%, avg=2707.15, stdev=246.82, samples=20 00:37:39.628 iops : min= 638, max= 928, avg=676.75, stdev=61.72, samples=20 00:37:39.628 lat (msec) : 4=1.05%, 10=1.22%, 20=2.45%, 50=95.28% 00:37:39.628 cpu : usr=98.75%, sys=0.83%, ctx=70, majf=0, minf=71 00:37:39.628 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615648: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10015msec) 00:37:39.628 slat (nsec): min=3788, max=57353, avg=11983.83, stdev=7895.53 00:37:39.628 clat (usec): min=8276, max=33556, avg=24093.04, stdev=1167.28 00:37:39.628 lat (usec): min=8306, max=33563, avg=24105.03, stdev=1167.38 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[18220], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.628 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:39.628 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.628 | 99.00th=[25297], 99.50th=[25560], 99.90th=[30540], 99.95th=[32900], 00:37:39.628 | 99.99th=[33817] 00:37:39.628 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2647.26, stdev=60.92, samples=19 00:37:39.628 iops : min= 640, max= 672, avg=661.79, stdev=15.22, samples=19 00:37:39.628 lat (msec) : 10=0.09%, 20=1.18%, 50=98.73% 00:37:39.628 cpu : usr=98.96%, sys=0.78%, ctx=14, majf=0, minf=47 00:37:39.628 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615649: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:37:39.628 slat (nsec): min=5854, max=62149, avg=14511.85, stdev=8670.28 00:37:39.628 clat (usec): min=10335, max=45945, avg=24145.92, stdev=1233.41 
00:37:39.628 lat (usec): min=10344, max=45962, avg=24160.43, stdev=1233.44 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.628 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.628 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.628 | 99.00th=[25297], 99.50th=[26346], 99.90th=[38536], 99.95th=[38536], 00:37:39.628 | 99.99th=[45876] 00:37:39.628 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2633.79, stdev=64.67, samples=19 00:37:39.628 iops : min= 640, max= 672, avg=658.42, stdev=16.15, samples=19 00:37:39.628 lat (msec) : 20=0.59%, 50=99.41% 00:37:39.628 cpu : usr=98.98%, sys=0.75%, ctx=14, majf=0, minf=46 00:37:39.628 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615650: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=660, BW=2640KiB/s (2703kB/s)(25.8MiB/10003msec) 00:37:39.628 slat (usec): min=5, max=101, avg=18.31, stdev=12.80 00:37:39.628 clat (usec): min=3635, max=52075, avg=24063.36, stdev=1593.03 00:37:39.628 lat (usec): min=3641, max=52113, avg=24081.67, stdev=1593.72 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.628 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.628 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:37:39.628 | 99.00th=[25297], 99.50th=[25297], 99.90th=[43779], 99.95th=[43779], 00:37:39.628 | 99.99th=[52167] 00:37:39.628 bw ( KiB/s): min= 2432, max= 2688, per=4.10%, avg=2626.74, stdev=77.81, samples=19 00:37:39.628 iops : min= 608, max= 672, avg=656.63, stdev=19.41, samples=19 00:37:39.628 lat (msec) : 4=0.15%, 10=0.09%, 20=0.42%, 50=99.30%, 100=0.03% 00:37:39.628 cpu : usr=97.28%, sys=1.82%, ctx=874, majf=0, minf=62 00:37:39.628 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615651: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=659, BW=2640KiB/s (2703kB/s)(25.8MiB/10009msec) 00:37:39.628 slat (usec): min=5, max=456, avg=25.61, stdev=23.41 00:37:39.628 clat (usec): min=10409, max=32752, avg=24016.04, stdev=1226.25 00:37:39.628 lat (usec): min=10421, max=32822, avg=24041.65, stdev=1219.20 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[18744], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:39.628 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.628 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.628 | 99.00th=[25297], 99.50th=[30016], 99.90th=[32113], 99.95th=[32637], 00:37:39.628 | 99.99th=[32637] 00:37:39.628 bw ( KiB/s): min= 2560, max= 2792, per=4.12%, avg=2635.60, stdev=73.76, samples=20 00:37:39.628 iops : min= 
640, max= 698, avg=658.90, stdev=18.44, samples=20 00:37:39.628 lat (msec) : 20=1.09%, 50=98.91% 00:37:39.628 cpu : usr=98.31%, sys=1.08%, ctx=299, majf=0, minf=49 00:37:39.628 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.628 issued rwts: total=6605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.628 filename0: (groupid=0, jobs=1): err= 0: pid=615652: Mon Dec 9 06:36:32 2024 00:37:39.628 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10010msec) 00:37:39.628 slat (nsec): min=5861, max=61003, avg=16188.98, stdev=9371.15 00:37:39.628 clat (usec): min=15132, max=25532, avg=24083.12, stdev=726.42 00:37:39.628 lat (usec): min=15145, max=25560, avg=24099.31, stdev=726.67 00:37:39.628 clat percentiles (usec): 00:37:39.628 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.628 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.628 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.628 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:37:39.628 | 99.99th=[25560] 00:37:39.628 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2633.79, stdev=64.67, samples=19 00:37:39.628 iops : min= 640, max= 672, avg=658.42, stdev=16.15, samples=19 00:37:39.628 lat (msec) : 20=0.48%, 50=99.52% 00:37:39.628 cpu : usr=98.92%, sys=0.80%, ctx=14, majf=0, minf=57 00:37:39.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:39.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615653: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10020msec) 00:37:39.629 slat (nsec): min=5845, max=58024, avg=15257.67, stdev=9646.76 00:37:39.629 clat (usec): min=7756, max=34669, avg=23957.81, stdev=1712.42 00:37:39.629 lat (usec): min=7776, max=34682, avg=23973.06, stdev=1711.16 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[11994], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.629 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.629 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.629 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[33817], 00:37:39.629 | 99.99th=[34866] 00:37:39.629 bw ( KiB/s): min= 2560, max= 2949, per=4.15%, avg=2656.25, stdev=92.52, samples=20 00:37:39.629 iops : min= 640, max= 737, avg=664.05, stdev=23.09, samples=20 00:37:39.629 lat (msec) : 10=0.72%, 20=0.92%, 50=98.36% 00:37:39.629 cpu : usr=98.86%, sys=0.87%, ctx=27, majf=0, minf=57 00:37:39.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: 
(groupid=0, jobs=1): err= 0: pid=615655: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=659, BW=2636KiB/s (2699kB/s)(25.8MiB/10003msec) 00:37:39.629 slat (nsec): min=5530, max=83752, avg=26214.89, stdev=15199.90 00:37:39.629 clat (usec): min=10349, max=43040, avg=24050.42, stdev=1282.75 00:37:39.629 lat (usec): min=10369, max=43056, avg=24076.63, stdev=1281.56 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:39.629 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:37:39.629 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:37:39.629 | 99.00th=[25297], 99.50th=[25560], 99.90th=[43254], 99.95th=[43254], 00:37:39.629 | 99.99th=[43254] 00:37:39.629 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2627.05, stdev=78.06, samples=19 00:37:39.629 iops : min= 608, max= 672, avg=656.74, stdev=19.50, samples=19 00:37:39.629 lat (msec) : 20=0.52%, 50=99.48% 00:37:39.629 cpu : usr=98.51%, sys=1.03%, ctx=111, majf=0, minf=48 00:37:39.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615656: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10020msec) 00:37:39.629 slat (nsec): min=5841, max=98036, avg=20238.92, stdev=14698.19 00:37:39.629 clat (usec): min=11256, max=37662, avg=23678.86, stdev=2214.24 00:37:39.629 lat (usec): min=11263, max=37668, avg=23699.10, stdev=2215.70 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[15008], 5.00th=[18220], 10.00th=[23462], 20.00th=[23725], 00:37:39.629 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:37:39.629 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:37:39.629 | 99.00th=[28705], 99.50th=[32375], 99.90th=[37487], 99.95th=[37487], 00:37:39.629 | 99.99th=[37487] 00:37:39.629 bw ( KiB/s): min= 2560, max= 2919, per=4.19%, avg=2680.05, stdev=95.66, samples=20 00:37:39.629 iops : min= 640, max= 729, avg=669.95, stdev=23.82, samples=20 00:37:39.629 lat (msec) : 20=6.28%, 50=93.72% 00:37:39.629 cpu : usr=98.61%, sys=1.11%, ctx=43, majf=0, minf=61 00:37:39.629 IO depths : 1=4.9%, 2=9.7%, 4=20.2%, 8=56.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=92.4%, 8=2.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615657: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10002msec) 00:37:39.629 slat (nsec): min=5832, max=85184, avg=16741.93, stdev=13939.09 00:37:39.629 clat (usec): min=10526, max=44648, avg=23678.48, stdev=4022.45 00:37:39.629 lat (usec): min=10541, max=44666, avg=23695.22, stdev=4023.26 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[13435], 5.00th=[16319], 10.00th=[19006], 20.00th=[21103], 00:37:39.629 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.629 | 70.00th=[24511], 
80.00th=[24773], 90.00th=[27919], 95.00th=[30278], 00:37:39.629 | 99.00th=[38011], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 00:37:39.629 | 99.99th=[44827] 00:37:39.629 bw ( KiB/s): min= 2452, max= 2848, per=4.20%, avg=2685.58, stdev=91.76, samples=19 00:37:39.629 iops : min= 613, max= 712, avg=671.32, stdev=22.95, samples=19 00:37:39.629 lat (msec) : 20=14.95%, 50=85.05% 00:37:39.629 cpu : usr=98.10%, sys=1.30%, ctx=116, majf=0, minf=64 00:37:39.629 IO depths : 1=0.5%, 2=1.2%, 4=6.1%, 8=77.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=89.9%, 8=6.9%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615658: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=661, BW=2645KiB/s (2708kB/s)(25.8MiB/10004msec) 00:37:39.629 slat (nsec): min=5763, max=73105, avg=11318.07, stdev=7975.04 00:37:39.629 clat (usec): min=15059, max=37784, avg=24107.31, stdev=1646.14 00:37:39.629 lat (usec): min=15065, max=37791, avg=24118.63, stdev=1646.36 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[15926], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.629 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.629 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.629 | 99.00th=[31327], 99.50th=[33162], 99.90th=[36963], 99.95th=[37487], 00:37:39.629 | 99.99th=[38011] 00:37:39.629 bw ( KiB/s): min= 2560, max= 2736, per=4.13%, avg=2643.05, stdev=64.58, samples=19 00:37:39.629 iops : min= 640, max= 684, avg=660.74, stdev=16.13, samples=19 00:37:39.629 lat (msec) : 20=2.40%, 50=97.60% 00:37:39.629 cpu : usr=98.88%, sys=0.85%, ctx=14, majf=0, minf=52 00:37:39.629 IO depths : 1=5.1%, 2=11.0%, 4=24.2%, 8=52.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615659: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10020msec) 00:37:39.629 slat (nsec): min=5887, max=61039, avg=13971.26, stdev=8630.28 00:37:39.629 clat (usec): min=7767, max=25423, avg=23970.76, stdev=1674.35 00:37:39.629 lat (usec): min=7786, max=25434, avg=23984.73, stdev=1673.07 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[12256], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.629 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.629 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.629 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:37:39.629 | 99.99th=[25297] 00:37:39.629 bw ( KiB/s): min= 2560, max= 2949, per=4.15%, avg=2656.25, stdev=92.52, samples=20 00:37:39.629 iops : min= 640, max= 737, avg=664.05, stdev=23.09, samples=20 00:37:39.629 lat (msec) : 10=0.72%, 20=0.89%, 50=98.39% 00:37:39.629 cpu : usr=98.78%, sys=0.94%, ctx=17, majf=0, minf=50 00:37:39.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615660: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10005msec) 00:37:39.629 slat (nsec): min=5006, max=80833, avg=22695.81, stdev=12767.98 00:37:39.629 clat (usec): min=5256, max=51306, avg=24033.46, stdev=1521.45 00:37:39.629 lat (usec): min=5262, max=51320, avg=24056.16, stdev=1521.18 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[21365], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.629 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.629 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:37:39.629 | 99.00th=[25560], 99.50th=[25560], 99.90th=[39584], 99.95th=[39584], 00:37:39.629 | 99.99th=[51119] 00:37:39.629 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2633.16, stdev=60.85, samples=19 00:37:39.629 iops : min= 640, max= 672, avg=658.21, stdev=15.17, samples=19 00:37:39.629 lat (msec) : 10=0.21%, 20=0.68%, 50=99.08%, 100=0.03% 00:37:39.629 cpu : usr=98.79%, sys=0.93%, ctx=11, majf=0, minf=56 00:37:39.629 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.629 issued rwts: total=6606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.629 filename1: (groupid=0, jobs=1): err= 0: pid=615661: Mon Dec 9 06:36:32 2024 00:37:39.629 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:37:39.629 slat (usec): min=5, max=109, avg=16.35, stdev=12.53 00:37:39.629 clat (usec): min=5943, max=39278, avg=24133.44, stdev=1213.38 00:37:39.629 lat (usec): min=5950, max=39294, avg=24149.78, stdev=1212.88 00:37:39.629 clat percentiles (usec): 00:37:39.629 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.629 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.630 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[25297], 99.50th=[25560], 99.90th=[39060], 99.95th=[39060], 00:37:39.630 | 99.99th=[39060] 00:37:39.630 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2633.79, stdev=64.67, samples=19 00:37:39.630 iops : min= 640, max= 672, avg=658.42, stdev=16.15, samples=19 00:37:39.630 lat (msec) : 10=0.05%, 20=0.50%, 50=99.45% 00:37:39.630 cpu : usr=98.99%, sys=0.74%, ctx=15, majf=0, minf=47 00:37:39.630 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615662: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=660, BW=2641KiB/s (2705kB/s)(25.8MiB/10003msec) 00:37:39.630 slat (usec): min=5, max=105, avg=14.61, stdev=12.07 00:37:39.630 clat (usec): min=2283, max=57437, avg=24178.75, stdev=1885.15 00:37:39.630 lat (usec): 
min=2289, max=57483, avg=24193.37, stdev=1886.14 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:37:39.630 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:39.630 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[25297], 99.50th=[25560], 99.90th=[43254], 99.95th=[56886], 00:37:39.630 | 99.99th=[57410] 00:37:39.630 bw ( KiB/s): min= 2432, max= 2682, per=4.10%, avg=2626.74, stdev=51.31, samples=19 00:37:39.630 iops : min= 608, max= 670, avg=656.63, stdev=12.79, samples=19 00:37:39.630 lat (msec) : 4=0.23%, 10=0.12%, 20=0.48%, 50=99.09%, 100=0.08% 00:37:39.630 cpu : usr=98.48%, sys=1.07%, ctx=113, majf=0, minf=115 00:37:39.630 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=81.1%, 16=18.7%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=84.3%, 8=15.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615663: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=661, BW=2645KiB/s (2708kB/s)(25.8MiB/10006msec) 00:37:39.630 slat (nsec): min=5856, max=80137, avg=22493.62, stdev=13251.29 00:37:39.630 clat (usec): min=7227, max=44097, avg=23997.54, stdev=1792.89 00:37:39.630 lat (usec): min=7233, max=44115, avg=24020.03, stdev=1793.25 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[17695], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:37:39.630 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:37:39.630 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[30540], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:37:39.630 | 99.99th=[44303] 00:37:39.630 bw ( KiB/s): min= 2528, max= 2736, per=4.12%, avg=2637.16, stdev=71.58, samples=19 00:37:39.630 iops : min= 632, max= 684, avg=659.26, stdev=17.88, samples=19 00:37:39.630 lat (msec) : 10=0.03%, 20=2.63%, 50=97.34% 00:37:39.630 cpu : usr=98.12%, sys=1.29%, ctx=174, majf=0, minf=58 00:37:39.630 IO depths : 1=4.9%, 2=10.7%, 4=23.7%, 8=53.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615664: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=659, BW=2640KiB/s (2703kB/s)(25.8MiB/10004msec) 00:37:39.630 slat (nsec): min=5879, max=89520, avg=26349.80, stdev=14729.31 00:37:39.630 clat (usec): min=3685, max=38394, avg=23993.67, stdev=1307.50 00:37:39.630 lat (usec): min=3691, max=38412, avg=24020.01, stdev=1307.38 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:39.630 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:37:39.630 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:39.630 | 99.00th=[25297], 99.50th=[25560], 99.90th=[38536], 99.95th=[38536], 00:37:39.630 | 99.99th=[38536] 00:37:39.630 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2633.42, stdev=64.49, samples=19 00:37:39.630 iops : 
min= 638, max= 672, avg=658.26, stdev=16.13, samples=19 00:37:39.630 lat (msec) : 4=0.05%, 10=0.11%, 20=0.48%, 50=99.36% 00:37:39.630 cpu : usr=98.67%, sys=0.90%, ctx=54, majf=0, minf=44 00:37:39.630 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615665: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=659, BW=2636KiB/s (2700kB/s)(25.8MiB/10002msec) 00:37:39.630 slat (nsec): min=5685, max=90827, avg=18031.03, stdev=14642.80 00:37:39.630 clat (usec): min=16466, max=40812, avg=24135.62, stdev=706.80 00:37:39.630 lat (usec): min=16475, max=40828, avg=24153.65, stdev=704.83 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.630 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.630 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[25297], 99.50th=[25560], 99.90th=[29754], 99.95th=[30016], 00:37:39.630 | 99.99th=[40633] 00:37:39.630 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2633.79, stdev=64.67, samples=19 00:37:39.630 iops : min= 640, max= 672, avg=658.42, stdev=16.15, samples=19 00:37:39.630 lat (msec) : 20=0.33%, 50=99.67% 00:37:39.630 cpu : usr=99.01%, sys=0.72%, ctx=16, majf=0, minf=43 00:37:39.630 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615666: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=656, BW=2626KiB/s (2689kB/s)(25.7MiB/10004msec) 00:37:39.630 slat (nsec): min=5270, max=86183, avg=21216.88, stdev=14773.98 00:37:39.630 clat (usec): min=5230, max=54853, avg=24180.49, stdev=3071.05 00:37:39.630 lat (usec): min=5235, max=54869, avg=24201.70, stdev=3071.13 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[15139], 5.00th=[19792], 10.00th=[23462], 20.00th=[23725], 00:37:39.630 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.630 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[29492], 00:37:39.630 | 99.00th=[33817], 99.50th=[39060], 99.90th=[54789], 99.95th=[54789], 00:37:39.630 | 99.99th=[54789] 00:37:39.630 bw ( KiB/s): min= 2432, max= 2832, per=4.08%, avg=2612.95, stdev=96.32, samples=19 00:37:39.630 iops : min= 608, max= 708, avg=653.16, stdev=24.06, samples=19 00:37:39.630 lat (msec) : 10=0.15%, 20=5.12%, 50=94.49%, 100=0.24% 00:37:39.630 cpu : usr=97.80%, sys=1.40%, ctx=377, majf=0, minf=40 00:37:39.630 IO depths : 1=4.7%, 2=9.5%, 4=20.1%, 8=57.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615667: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=674, BW=2699KiB/s (2763kB/s)(26.4MiB/10017msec) 00:37:39.630 slat (nsec): min=5850, max=60291, avg=10468.13, stdev=6702.51 00:37:39.630 clat (usec): min=1715, max=32626, avg=23629.34, stdev=3194.70 00:37:39.630 lat (usec): min=1726, max=32633, avg=23639.81, stdev=3193.83 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[ 3621], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.630 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:39.630 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[25297], 99.50th=[25297], 99.90th=[30802], 99.95th=[31851], 00:37:39.630 | 99.99th=[32637] 00:37:39.630 bw ( KiB/s): min= 2560, max= 3752, per=4.21%, avg=2696.10, stdev=256.03, samples=20 00:37:39.630 iops : min= 640, max= 938, avg=674.00, stdev=64.01, samples=20 00:37:39.630 lat (msec) : 2=0.15%, 4=0.93%, 10=1.46%, 20=1.18%, 50=96.27% 00:37:39.630 cpu : usr=98.90%, sys=0.82%, ctx=14, majf=0, minf=70 00:37:39.630 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.630 filename2: (groupid=0, jobs=1): err= 0: pid=615668: Mon Dec 9 06:36:32 2024 00:37:39.630 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10010msec) 00:37:39.630 slat (nsec): min=5847, max=86942, avg=13189.15, stdev=11403.40 00:37:39.630 clat (usec): min=11203, max=26701, avg=24133.43, stdev=759.10 00:37:39.630 lat (usec): min=11213, max=26724, avg=24146.62, stdev=757.50 00:37:39.630 clat percentiles (usec): 00:37:39.630 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:39.630 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:39.630 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:39.630 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:37:39.630 | 99.99th=[26608] 00:37:39.630 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2636.50, stdev=64.73, samples=20 00:37:39.630 iops : min= 638, max= 672, avg=659.10, stdev=16.22, samples=20 00:37:39.630 lat (msec) : 20=0.51%, 50=99.49% 00:37:39.630 cpu : usr=98.89%, sys=0.84%, ctx=14, majf=0, minf=52 00:37:39.630 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.630 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.631 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.631 filename2: (groupid=0, jobs=1): err= 0: pid=615670: Mon Dec 9 06:36:32 2024 00:37:39.631 read: IOPS=659, BW=2636KiB/s (2699kB/s)(25.8MiB/10003msec) 00:37:39.631 slat (nsec): min=5854, max=87831, avg=27206.58, stdev=15534.44 00:37:39.631 clat (usec): min=10363, max=43003, avg=24017.77, stdev=1309.39 00:37:39.631 lat (usec): min=10369, max=43025, avg=24044.98, stdev=1308.93 00:37:39.631 clat percentiles (usec): 00:37:39.631 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:39.631 | 30.00th=[23725], 40.00th=[23987], 
50.00th=[23987], 60.00th=[23987], 00:37:39.631 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:39.631 | 99.00th=[25297], 99.50th=[25560], 99.90th=[42730], 99.95th=[42730], 00:37:39.631 | 99.99th=[43254] 00:37:39.631 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2627.05, stdev=78.06, samples=19 00:37:39.631 iops : min= 608, max= 672, avg=656.74, stdev=19.50, samples=19 00:37:39.631 lat (msec) : 20=0.61%, 50=99.39% 00:37:39.631 cpu : usr=98.66%, sys=0.89%, ctx=157, majf=0, minf=42 00:37:39.631 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:39.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.631 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.631 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.631 00:37:39.631 Run status group 0 (all jobs): 00:37:39.631 READ: bw=62.5MiB/s (65.5MB/s), 2626KiB/s-2906KiB/s (2689kB/s-2976kB/s), io=626MiB (657MB), run=10002-10024msec 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 
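Note on the teardown running through this stretch of the trace: destroy_subsystems 0 1 2 visits each subsystem id and issues two RPCs per id, first dropping the NVMe-oF subsystem and then the null bdev behind it. Stripped of the xtrace wrapping, each iteration reduces to the sketch below; rpc_cmd effectively forwards to scripts/rpc.py, and the names are exactly those in the trace, with N standing in for the subsystem id.

    # per-subsystem teardown, as traced (N = 0, 1, 2)
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnodeN
    rpc.py bdev_null_delete bdev_nullN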
06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 bdev_null0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 [2024-12-09 06:36:32.658963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 bdev_null1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
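Note on the fio invocation traced above: fio_bdev hands fio two process-substitution descriptors, /dev/fd/62 carrying the generated SPDK JSON config and /dev/fd/61 carrying the generated job file, and (as the LD_PRELOAD assignment further down in the trace shows) runs fio with SPDK's bdev engine plugin preloaded. A rough standalone equivalent, with on-disk files in place of the /dev/fd plumbing and an illustrative plugin path:

    # run fio against SPDK bdevs instead of kernel block devices
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio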
00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.631 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.631 { 00:37:39.631 "params": { 00:37:39.631 "name": "Nvme$subsystem", 00:37:39.631 "trtype": "$TEST_TRANSPORT", 00:37:39.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.631 "adrfam": "ipv4", 00:37:39.631 "trsvcid": "$NVMF_PORT", 00:37:39.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.631 "hdgst": ${hdgst:-false}, 00:37:39.631 "ddgst": ${ddgst:-false} 00:37:39.631 }, 00:37:39.632 "method": "bdev_nvme_attach_controller" 00:37:39.632 } 00:37:39.632 EOF 00:37:39.632 )") 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.632 { 00:37:39.632 "params": { 00:37:39.632 "name": "Nvme$subsystem", 00:37:39.632 "trtype": "$TEST_TRANSPORT", 00:37:39.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.632 "adrfam": "ipv4", 00:37:39.632 "trsvcid": "$NVMF_PORT", 00:37:39.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.632 "hdgst": ${hdgst:-false}, 00:37:39.632 "ddgst": ${ddgst:-false} 00:37:39.632 }, 00:37:39.632 "method": "bdev_nvme_attach_controller" 00:37:39.632 } 00:37:39.632 EOF 00:37:39.632 )") 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.632 06:36:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:39.632 "params": { 00:37:39.632 "name": "Nvme0", 00:37:39.632 "trtype": "tcp", 00:37:39.632 "traddr": "10.0.0.2", 00:37:39.632 "adrfam": "ipv4", 00:37:39.632 "trsvcid": "4420", 00:37:39.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.632 "hdgst": false, 00:37:39.632 "ddgst": false 00:37:39.632 }, 00:37:39.632 "method": "bdev_nvme_attach_controller" 00:37:39.632 },{ 00:37:39.632 "params": { 00:37:39.632 "name": "Nvme1", 00:37:39.632 "trtype": "tcp", 00:37:39.632 "traddr": "10.0.0.2", 00:37:39.632 "adrfam": "ipv4", 00:37:39.632 "trsvcid": "4420", 00:37:39.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:39.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:39.632 "hdgst": false, 00:37:39.632 "ddgst": false 00:37:39.632 }, 00:37:39.632 "method": "bdev_nvme_attach_controller" 00:37:39.632 }' 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:39.632 06:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.632 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:39.632 ... 00:37:39.632 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:39.632 ... 
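Note: the two job banners above follow from the parameters set at the top of this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1); fio's bs takes read,write,trim sizes, which is why the banner reports (R) 8192B, (W) 16.0KiB, (T) 128KiB. A hand-written job file of roughly this shape would reproduce the layout; the real one is emitted by gen_fio_conf on /dev/fd/61, and the Nvme*n1 bdev names here are hypothetical:

    # illustrative job file matching the traced parameters
    cat > job.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    runtime=5
    numjobs=2
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF

Four threads start below because each of the two job sections is cloned numjobs=2 times.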
00:37:39.632 fio-3.35 00:37:39.632 Starting 4 threads 00:37:44.918 00:37:44.919 filename0: (groupid=0, jobs=1): err= 0: pid=617666: Mon Dec 9 06:36:38 2024 00:37:44.919 read: IOPS=2851, BW=22.3MiB/s (23.4MB/s)(111MiB/5002msec) 00:37:44.919 slat (nsec): min=5664, max=80449, avg=6722.92, stdev=2794.04 00:37:44.919 clat (usec): min=1263, max=5474, avg=2787.84, stdev=213.71 00:37:44.919 lat (usec): min=1269, max=5554, avg=2794.56, stdev=213.85 00:37:44.919 clat percentiles (usec): 00:37:44.919 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2737], 00:37:44.919 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:37:44.919 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 3032], 00:37:44.919 | 99.00th=[ 3785], 99.50th=[ 4047], 99.90th=[ 4228], 99.95th=[ 4490], 00:37:44.919 | 99.99th=[ 5473] 00:37:44.919 bw ( KiB/s): min=22672, max=22912, per=25.08%, avg=22821.33, stdev=94.99, samples=9 00:37:44.919 iops : min= 2834, max= 2864, avg=2852.67, stdev=11.87, samples=9 00:37:44.919 lat (msec) : 2=0.25%, 4=99.08%, 10=0.67% 00:37:44.919 cpu : usr=96.12%, sys=3.62%, ctx=51, majf=0, minf=0 00:37:44.919 IO depths : 1=0.1%, 2=0.1%, 4=70.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 issued rwts: total=14262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.919 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:44.919 filename0: (groupid=0, jobs=1): err= 0: pid=617667: Mon Dec 9 06:36:38 2024 00:37:44.919 read: IOPS=2832, BW=22.1MiB/s (23.2MB/s)(111MiB/5001msec) 00:37:44.919 slat (nsec): min=5656, max=66230, avg=6414.01, stdev=2318.63 00:37:44.919 clat (usec): min=1550, max=5044, avg=2807.40, stdev=220.28 00:37:44.919 lat (usec): min=1556, max=5049, avg=2813.82, stdev=220.36 00:37:44.919 clat percentiles (usec): 00:37:44.919 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2769], 00:37:44.919 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:37:44.919 | 70.00th=[ 2802], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 3064], 00:37:44.919 | 99.00th=[ 3982], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 4686], 00:37:44.919 | 99.99th=[ 5014] 00:37:44.919 bw ( KiB/s): min=22275, max=22928, per=24.92%, avg=22679.44, stdev=181.08, samples=9 00:37:44.919 iops : min= 2784, max= 2866, avg=2834.89, stdev=22.74, samples=9 00:37:44.919 lat (msec) : 2=0.23%, 4=98.81%, 10=0.95% 00:37:44.919 cpu : usr=95.74%, sys=3.84%, ctx=159, majf=0, minf=9 00:37:44.919 IO depths : 1=0.1%, 2=0.1%, 4=72.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 issued rwts: total=14164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.919 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:44.919 filename1: (groupid=0, jobs=1): err= 0: pid=617668: Mon Dec 9 06:36:38 2024 00:37:44.919 read: IOPS=2846, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:37:44.919 slat (nsec): min=5662, max=74749, avg=6419.77, stdev=2383.60 00:37:44.919 clat (usec): min=1358, max=4582, avg=2794.24, stdev=197.22 00:37:44.919 lat (usec): min=1374, max=4607, avg=2800.66, stdev=197.25 00:37:44.919 clat percentiles (usec): 00:37:44.919 | 1.00th=[ 2278], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2737], 00:37:44.919 | 30.00th=[ 2769], 40.00th=[ 
2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:37:44.919 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 3032], 00:37:44.919 | 99.00th=[ 3785], 99.50th=[ 4113], 99.90th=[ 4424], 99.95th=[ 4424], 00:37:44.919 | 99.99th=[ 4555] 00:37:44.919 bw ( KiB/s): min=22653, max=22896, per=25.03%, avg=22776.56, stdev=76.54, samples=9 00:37:44.919 iops : min= 2831, max= 2862, avg=2847.00, stdev= 9.70, samples=9 00:37:44.919 lat (msec) : 2=0.17%, 4=99.21%, 10=0.63% 00:37:44.919 cpu : usr=97.06%, sys=2.70%, ctx=6, majf=0, minf=0 00:37:44.919 IO depths : 1=0.1%, 2=0.1%, 4=70.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 issued rwts: total=14236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.919 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:44.919 filename1: (groupid=0, jobs=1): err= 0: pid=617669: Mon Dec 9 06:36:38 2024 00:37:44.919 read: IOPS=2845, BW=22.2MiB/s (23.3MB/s)(111MiB/5001msec) 00:37:44.919 slat (nsec): min=5661, max=51909, avg=6610.91, stdev=2806.95 00:37:44.919 clat (usec): min=1443, max=4777, avg=2794.60, stdev=196.86 00:37:44.919 lat (usec): min=1449, max=4783, avg=2801.22, stdev=196.91 00:37:44.919 clat percentiles (usec): 00:37:44.919 | 1.00th=[ 2311], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2769], 00:37:44.919 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:37:44.919 | 70.00th=[ 2802], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 3032], 00:37:44.919 | 99.00th=[ 3687], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[ 4424], 00:37:44.919 | 99.99th=[ 4752] 00:37:44.919 bw ( KiB/s): min=22672, max=22848, per=25.02%, avg=22769.78, stdev=68.09, samples=9 00:37:44.919 iops : min= 2834, max= 2856, avg=2846.22, stdev= 8.51, samples=9 00:37:44.919 lat (msec) : 2=0.31%, 4=99.14%, 10=0.56% 00:37:44.919 cpu : usr=96.28%, sys=3.48%, ctx=7, majf=0, minf=9 00:37:44.919 IO depths : 1=0.1%, 2=0.1%, 4=69.4%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.919 issued rwts: total=14232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.919 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:44.919 00:37:44.919 Run status group 0 (all jobs): 00:37:44.919 READ: bw=88.9MiB/s (93.2MB/s), 22.1MiB/s-22.3MiB/s (23.2MB/s-23.4MB/s), io=444MiB (466MB), run=5001-5002msec 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.919 06:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 06:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.919 00:37:44.919 real 0m24.240s 00:37:44.919 user 5m1.826s 00:37:44.919 sys 0m4.838s 00:37:44.919 06:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:44.919 06:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 ************************************ 00:37:44.919 END TEST fio_dif_rand_params 00:37:44.919 ************************************ 00:37:44.919 06:36:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:44.919 06:36:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:44.919 06:36:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:44.919 06:36:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 ************************************ 00:37:44.919 START TEST fio_dif_digest 00:37:44.919 ************************************ 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:44.919 06:36:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.919 bdev_null0 00:37:44.919 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.920 [2024-12-09 06:36:39.132860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:44.920 { 00:37:44.920 "params": { 00:37:44.920 "name": "Nvme$subsystem", 00:37:44.920 "trtype": "$TEST_TRANSPORT", 00:37:44.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.920 "adrfam": "ipv4", 00:37:44.920 "trsvcid": "$NVMF_PORT", 00:37:44.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.920 
"hdgst": ${hdgst:-false}, 00:37:44.920 "ddgst": ${ddgst:-false} 00:37:44.920 }, 00:37:44.920 "method": "bdev_nvme_attach_controller" 00:37:44.920 } 00:37:44.920 EOF 00:37:44.920 )") 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:44.920 "params": { 00:37:44.920 "name": "Nvme0", 00:37:44.920 "trtype": "tcp", 00:37:44.920 "traddr": "10.0.0.2", 00:37:44.920 "adrfam": "ipv4", 00:37:44.920 "trsvcid": "4420", 00:37:44.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.920 "hdgst": true, 00:37:44.920 "ddgst": true 00:37:44.920 }, 00:37:44.920 "method": "bdev_nvme_attach_controller" 00:37:44.920 }' 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:44.920 06:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:45.181 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:45.181 ... 
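Note on this fio_dif_digest run: the attach parameters printed just above set "hdgst": true and "ddgst": true, so the initiator negotiates NVMe/TCP header and data digests against a target whose namespace is a --dif-type 3 null bdev. Collapsed to bare scripts/rpc.py calls, the setup traced earlier in this test is (arguments verbatim from the trace):

    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420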
00:37:45.181 fio-3.35 00:37:45.181 Starting 3 threads 00:37:57.415 00:37:57.415 filename0: (groupid=0, jobs=1): err= 0: pid=618790: Mon Dec 9 06:36:50 2024 00:37:57.415 read: IOPS=366, BW=45.9MiB/s (48.1MB/s)(461MiB/10045msec) 00:37:57.415 slat (nsec): min=3155, max=19861, avg=7098.88, stdev=606.46 00:37:57.415 clat (usec): min=4756, max=56067, avg=8154.92, stdev=2083.33 00:37:57.415 lat (usec): min=4763, max=56077, avg=8162.02, stdev=2083.30 00:37:57.415 clat percentiles (usec): 00:37:57.415 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6718], 00:37:57.415 | 30.00th=[ 7046], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8717], 00:37:57.415 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:37:57.415 | 99.00th=[10421], 99.50th=[10552], 99.90th=[50070], 99.95th=[54789], 00:37:57.415 | 99.99th=[55837] 00:37:57.415 bw ( KiB/s): min=42324, max=51200, per=43.29%, avg=47146.60, stdev=2159.56, samples=20 00:37:57.416 iops : min= 330, max= 400, avg=368.30, stdev=16.95, samples=20 00:37:57.416 lat (msec) : 10=95.90%, 20=3.96%, 50=0.03%, 100=0.11% 00:37:57.416 cpu : usr=95.64%, sys=4.11%, ctx=31, majf=0, minf=158 00:37:57.416 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 issued rwts: total=3686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.416 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.416 filename0: (groupid=0, jobs=1): err= 0: pid=618791: Mon Dec 9 06:36:50 2024 00:37:57.416 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(376MiB/10043msec) 00:37:57.416 slat (nsec): min=3204, max=21325, avg=7033.33, stdev=891.29 00:37:57.416 clat (usec): min=5895, max=51511, avg=10000.72, stdev=1893.75 00:37:57.416 lat (usec): min=5901, max=51519, avg=10007.76, stdev=1893.75 00:37:57.416 clat percentiles (usec): 00:37:57.416 | 1.00th=[ 7111], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8225], 00:37:57.416 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10683], 00:37:57.416 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:37:57.416 | 99.00th=[13042], 99.50th=[13435], 99.90th=[13960], 99.95th=[49546], 00:37:57.416 | 99.99th=[51643] 00:37:57.416 bw ( KiB/s): min=35840, max=41216, per=35.30%, avg=38451.20, stdev=1461.43, samples=20 00:37:57.416 iops : min= 280, max= 322, avg=300.40, stdev=11.42, samples=20 00:37:57.416 lat (msec) : 10=45.81%, 20=54.13%, 50=0.03%, 100=0.03% 00:37:57.416 cpu : usr=94.11%, sys=5.64%, ctx=75, majf=0, minf=140 00:37:57.416 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 issued rwts: total=3006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.416 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.416 filename0: (groupid=0, jobs=1): err= 0: pid=618792: Mon Dec 9 06:36:50 2024 00:37:57.416 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10047msec) 00:37:57.416 slat (nsec): min=2997, max=21170, avg=7067.37, stdev=906.26 00:37:57.416 clat (usec): min=7945, max=93981, avg=16199.32, stdev=14589.57 00:37:57.416 lat (usec): min=7952, max=93989, avg=16206.39, stdev=14589.55 00:37:57.416 clat percentiles (usec): 00:37:57.416 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 
20.00th=[10290], 00:37:57.416 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:37:57.416 | 70.00th=[11731], 80.00th=[12387], 90.00th=[50594], 95.00th=[52167], 00:37:57.416 | 99.00th=[53740], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:37:57.416 | 99.99th=[93848] 00:37:57.416 bw ( KiB/s): min=17152, max=29184, per=21.79%, avg=23731.20, stdev=3886.96, samples=20 00:37:57.416 iops : min= 134, max= 228, avg=185.40, stdev=30.37, samples=20 00:37:57.416 lat (msec) : 10=12.28%, 20=75.88%, 50=0.75%, 100=11.09% 00:37:57.416 cpu : usr=94.75%, sys=4.98%, ctx=102, majf=0, minf=81 00:37:57.416 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.416 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.416 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.416 00:37:57.416 Run status group 0 (all jobs): 00:37:57.416 READ: bw=106MiB/s (112MB/s), 23.1MiB/s-45.9MiB/s (24.2MB/s-48.1MB/s), io=1069MiB (1121MB), run=10043-10047msec 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.416 00:37:57.416 real 0m11.150s 00:37:57.416 user 0m37.435s 00:37:57.416 sys 0m1.750s 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.416 06:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.416 ************************************ 00:37:57.416 END TEST fio_dif_digest 00:37:57.416 ************************************ 00:37:57.416 06:36:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:57.416 06:36:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:57.416 rmmod nvme_tcp 00:37:57.416 rmmod nvme_fabrics 00:37:57.416 rmmod nvme_keyring 00:37:57.416 
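Note: the rmmod lines above are side effects of the single modprobe -v -r nvme-tcp call in nvmftestfini; modprobe -r also removes dependencies that became unused, so nvme_fabrics and nvme_keyring go with nvme_tcp, and the explicit nvme-fabrics removal that follows below is then largely a safety step. By hand, the unload is simply:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics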
06:36:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 609312 ']' 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 609312 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 609312 ']' 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 609312 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609312 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609312' 00:37:57.416 killing process with pid 609312 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 609312 00:37:57.416 06:36:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 609312 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:57.416 06:36:50 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:59.418 Waiting for block devices as requested 00:37:59.418 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:59.418 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:59.725 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.725 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:59.725 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.725 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:00.009 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:00.009 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:00.009 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:38:00.291 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:00.291 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:00.292 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:00.600 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:00.600 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:00.600 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:00.921 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:00.921 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.212 06:36:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.212 06:36:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:01.212 06:36:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.122 06:36:57 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
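The nvmftestfini teardown traced above reduces to three network-cleanup steps: restore iptables without the SPDK-tagged rules, drop the target network namespace, and flush the initiator-side interface. As a bash sketch (the _remove_spdk_ns body runs with xtrace disabled in this log, so the netns deletion line is an assumption):

    # iptr, as traced: replay the saved ruleset minus every rule carrying
    # the SPDK_NVMF comment that the harness adds when it opens port 4420.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Assumed body of _remove_spdk_ns (hidden behind xtrace_disable_per_cmd):
    ip netns delete cvl_0_0_ns_spdk

    # Flush the initiator-side interface, exactly as traced.
    ip -4 addr flush cvl_0_1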
00:38:03.122 00:38:03.123 real 1m18.161s 00:38:03.123 user 7m29.056s 00:38:03.123 sys 0m22.051s 00:38:03.123 06:36:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.123 06:36:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:03.123 ************************************ 00:38:03.123 END TEST nvmf_dif 00:38:03.123 ************************************ 00:38:03.123 06:36:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:03.123 06:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:03.123 06:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.123 06:36:57 -- common/autotest_common.sh@10 -- # set +x 00:38:03.384 ************************************ 00:38:03.384 START TEST nvmf_abort_qd_sizes 00:38:03.384 ************************************ 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:03.384 * Looking for test storage... 00:38:03.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:03.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.384 --rc genhtml_branch_coverage=1 00:38:03.384 --rc genhtml_function_coverage=1 00:38:03.384 --rc genhtml_legend=1 00:38:03.384 --rc geninfo_all_blocks=1 00:38:03.384 --rc geninfo_unexecuted_blocks=1 00:38:03.384 00:38:03.384 ' 00:38:03.384 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:03.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.384 --rc genhtml_branch_coverage=1 00:38:03.384 --rc genhtml_function_coverage=1 00:38:03.384 --rc genhtml_legend=1 00:38:03.385 --rc geninfo_all_blocks=1 00:38:03.385 --rc geninfo_unexecuted_blocks=1 00:38:03.385 00:38:03.385 ' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.385 --rc genhtml_branch_coverage=1 00:38:03.385 --rc genhtml_function_coverage=1 00:38:03.385 --rc genhtml_legend=1 00:38:03.385 --rc geninfo_all_blocks=1 00:38:03.385 --rc geninfo_unexecuted_blocks=1 00:38:03.385 00:38:03.385 ' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.385 --rc genhtml_branch_coverage=1 00:38:03.385 --rc genhtml_function_coverage=1 00:38:03.385 --rc genhtml_legend=1 00:38:03.385 --rc geninfo_all_blocks=1 00:38:03.385 --rc geninfo_unexecuted_blocks=1 00:38:03.385 00:38:03.385 ' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:03.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:03.385 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.646 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.646 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:03.646 06:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.646 06:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:11.819 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:11.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:11.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:11.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:11.820 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:11.820 06:37:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:11.820 06:37:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:11.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:11.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:38:11.820 00:38:11.820 --- 10.0.0.2 ping statistics --- 00:38:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.820 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:11.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:11.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:38:11.820 00:38:11.820 --- 10.0.0.1 ping statistics --- 00:38:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.820 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:11.820 06:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:14.359 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:14.359 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:16.270 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.530 06:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=627639 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 627639 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 627639 ']' 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:16.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.530 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.530 [2024-12-09 06:37:11.054622] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:38:16.531 [2024-12-09 06:37:11.054676] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.791 [2024-12-09 06:37:11.148231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:16.791 [2024-12-09 06:37:11.202545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.791 [2024-12-09 06:37:11.202601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.791 [2024-12-09 06:37:11.202610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.791 [2024-12-09 06:37:11.202617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.791 [2024-12-09 06:37:11.202623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.791 [2024-12-09 06:37:11.204860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.791 [2024-12-09 06:37:11.204989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.791 [2024-12-09 06:37:11.205150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.791 [2024-12-09 06:37:11.205150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:17.362 
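By this point nvmf_tgt is running inside the target namespace with all four reactors polling. The nvmfappstart idiom captured in the trace boils down to roughly the following (paths shortened; waitforlisten is the harness helper that polls the app's /var/tmp/spdk.sock RPC socket, and its body is not reproduced here):

    # Sketch of the startup traced above: run the target inside the netns
    # with tracepoints enabled (-e 0xFFFF) on cores 0-3 (-m 0xf), then block
    # until the RPC socket is up before issuing any rpc_cmd calls.
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    waitforlisten "$nvmfpid"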
06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.362 06:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:17.622 ************************************ 00:38:17.622 START TEST spdk_target_abort 00:38:17.622 ************************************ 00:38:17.622 06:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:17.622 06:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:17.622 06:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:17.622 06:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.622 06:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.920 spdk_targetn1 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.920 [2024-12-09 06:37:14.801630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.920 [2024-12-09 06:37:14.850485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:20.920 06:37:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.498 Initializing NVMe Controllers 00:38:23.498 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:23.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:23.498 Initialization complete. Launching workers. 00:38:23.498 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12263, failed: 0 00:38:23.498 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2045, failed to submit 10218 00:38:23.498 success 731, unsuccessful 1314, failed 0 00:38:23.498 06:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.498 06:37:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.697 Initializing NVMe Controllers 00:38:27.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:27.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:27.697 Initialization complete. Launching workers. 00:38:27.697 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9011, failed: 0 00:38:27.697 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7782 00:38:27.697 success 337, unsuccessful 892, failed 0 00:38:27.697 06:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:27.697 06:37:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:30.240 Initializing NVMe Controllers 00:38:30.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:30.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:30.240 Initialization complete. Launching workers. 
00:38:30.241 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43082, failed: 0 00:38:30.241 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2460, failed to submit 40622 00:38:30.241 success 583, unsuccessful 1877, failed 0 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.241 06:37:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 627639 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 627639 ']' 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 627639 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:32.786 06:37:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627639 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627639' 00:38:32.786 killing process with pid 627639 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 627639 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 627639 00:38:32.786 00:38:32.786 real 0m15.153s 00:38:32.786 user 1m0.736s 00:38:32.786 sys 0m2.219s 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:32.786 ************************************ 00:38:32.786 END TEST spdk_target_abort 00:38:32.786 ************************************ 00:38:32.786 06:37:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:32.786 06:37:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:32.786 06:37:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:32.786 06:37:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:32.786 ************************************ 00:38:32.786 START TEST kernel_target_abort 00:38:32.786 
************************************ 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:32.786 06:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:36.085 Waiting for block devices as requested 00:38:36.345 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:36.345 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:36.345 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:36.345 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:36.605 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:36.605 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:36.605 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:36.865 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:36.865 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:38:36.865 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:37.125 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:37.125 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:37.125 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:37.385 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:37.385 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:37.385 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:37.646 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:37.907 No valid GPT data, bailing 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:37.907 06:37:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:37.907 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:38:38.168 00:38:38.168 Discovery Log Number of Records 2, Generation counter 2 00:38:38.168 =====Discovery Log Entry 0====== 00:38:38.168 trtype: tcp 00:38:38.168 adrfam: ipv4 00:38:38.168 subtype: current discovery subsystem 00:38:38.168 treq: not specified, sq flow control disable supported 00:38:38.168 portid: 1 00:38:38.168 trsvcid: 4420 00:38:38.168 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:38.168 traddr: 10.0.0.1 00:38:38.168 eflags: none 00:38:38.168 sectype: none 00:38:38.168 =====Discovery Log Entry 1====== 00:38:38.168 trtype: tcp 00:38:38.168 adrfam: ipv4 00:38:38.168 subtype: nvme subsystem 00:38:38.168 treq: not specified, sq flow control disable supported 00:38:38.168 portid: 1 00:38:38.168 trsvcid: 4420 00:38:38.168 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:38.168 traddr: 10.0.0.1 00:38:38.168 eflags: none 00:38:38.168 sectype: none 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.168 06:37:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:38.168 06:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:41.475 Initializing NVMe Controllers 00:38:41.475 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:41.475 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:41.475 Initialization complete. Launching workers. 00:38:41.475 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65718, failed: 0 00:38:41.475 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65718, failed to submit 0 00:38:41.475 success 0, unsuccessful 65718, failed 0 00:38:41.475 06:37:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:41.475 06:37:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:44.779 Initializing NVMe Controllers 00:38:44.779 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:44.779 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:44.779 Initialization complete. Launching workers. 
00:38:44.779 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120129, failed: 0 00:38:44.779 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24746, failed to submit 95383 00:38:44.779 success 0, unsuccessful 24746, failed 0 00:38:44.779 06:37:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:44.779 06:37:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:47.320 Initializing NVMe Controllers 00:38:47.320 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:47.320 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:47.320 Initialization complete. Launching workers. 00:38:47.320 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134694, failed: 0 00:38:47.320 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33710, failed to submit 100984 00:38:47.320 success 0, unsuccessful 33710, failed 0 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:47.320 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:47.579 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:47.579 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:47.579 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:47.579 06:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:50.878 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:50.878 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:51.149 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:51.149 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:51.149 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:51.149 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:51.149 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:51.149 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:53.064 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:38:53.326 00:38:53.326 real 0m20.498s 00:38:53.326 user 0m9.621s 00:38:53.326 sys 0m6.345s 00:38:53.326 06:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.326 06:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:53.326 ************************************ 00:38:53.326 END TEST kernel_target_abort 00:38:53.326 ************************************ 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:53.326 rmmod nvme_tcp 00:38:53.326 rmmod nvme_fabrics 00:38:53.326 rmmod nvme_keyring 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 627639 ']' 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 627639 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 627639 ']' 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 627639 00:38:53.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (627639) - No such process 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 627639 is not found' 00:38:53.326 Process with pid 627639 is not found 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:53.326 06:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:56.626 Waiting for block devices as requested 00:38:56.626 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:56.887 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:56.887 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:56.887 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:57.146 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:57.146 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:57.146 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:57.405 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:57.405 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:38:57.665 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:57.665 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:57.665 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:57.925 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:57.925 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:57.925 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:57.925 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:58.185 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:58.444 06:37:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.983 06:37:54 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:00.983 00:39:00.983 real 0m57.245s 00:39:00.983 user 1m15.964s 00:39:00.983 sys 0m19.420s 00:39:00.983 06:37:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.983 06:37:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:00.983 ************************************ 00:39:00.983 END TEST nvmf_abort_qd_sizes 00:39:00.983 ************************************ 00:39:00.983 06:37:55 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:00.983 06:37:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:00.983 06:37:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:00.983 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:39:00.983 ************************************ 00:39:00.983 START TEST keyring_file 00:39:00.983 ************************************ 00:39:00.983 06:37:55 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:00.983 * Looking for test storage... 
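
Before anything keyring-specific runs, the suite probes the installed lcov: the trace just below walks scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field ('lt 1.15 2' succeeds, so the pre-2.0 '--rc lcov_branch_coverage=1' option syntax is selected for LCOV_OPTS). A simplified stand-in for that helper — the real one also routes each field through its decimal normalizer, visible in the trace:

# Simplified sketch of cmp_versions/lt; missing fields default to 0 and
# non-numeric fields are not normalized the way the real helper does it.
cmp_versions_sketch() {
  local -a ver1 ver2
  local v n
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  n=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
  for ((v = 0; v < n; v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == ">" ]]; return; }
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == "<" ]]; return; }
  done
  [[ $2 == "==" ]]
}
lt() { cmp_versions_sketch "$1" "<" "$2"; }   # e.g. lt 1.15 2 -> exit 0
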
00:39:00.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.983 06:37:55 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:00.983 06:37:55 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:39:00.983 06:37:55 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:00.983 06:37:55 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:00.983 06:37:55 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.983 06:37:55 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:00.984 06:37:55 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.984 06:37:55 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.984 --rc genhtml_branch_coverage=1 00:39:00.984 --rc genhtml_function_coverage=1 00:39:00.984 --rc genhtml_legend=1 00:39:00.984 --rc geninfo_all_blocks=1 00:39:00.984 --rc geninfo_unexecuted_blocks=1 00:39:00.984 00:39:00.984 ' 00:39:00.984 06:37:55 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.984 --rc genhtml_branch_coverage=1 00:39:00.984 --rc genhtml_function_coverage=1 00:39:00.984 --rc genhtml_legend=1 00:39:00.984 --rc geninfo_all_blocks=1 
00:39:00.984 --rc geninfo_unexecuted_blocks=1 00:39:00.984 00:39:00.984 ' 00:39:00.984 06:37:55 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.984 --rc genhtml_branch_coverage=1 00:39:00.984 --rc genhtml_function_coverage=1 00:39:00.984 --rc genhtml_legend=1 00:39:00.984 --rc geninfo_all_blocks=1 00:39:00.984 --rc geninfo_unexecuted_blocks=1 00:39:00.984 00:39:00.984 ' 00:39:00.984 06:37:55 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:00.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.984 --rc genhtml_branch_coverage=1 00:39:00.984 --rc genhtml_function_coverage=1 00:39:00.984 --rc genhtml_legend=1 00:39:00.984 --rc geninfo_all_blocks=1 00:39:00.984 --rc geninfo_unexecuted_blocks=1 00:39:00.984 00:39:00.984 ' 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.984 06:37:55 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.984 06:37:55 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.984 06:37:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.984 06:37:55 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.984 06:37:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:00.984 06:37:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
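
prep_key, whose trace resumes below, mints a file with mktemp, writes the key in the NVMe TLS interchange form, and locks it down to 0600 so the keyring module will accept it. xtrace elides the body of the `python -` heredoc that does the encoding, so the following format_key sketch is a reconstruction: the arguments mirror the format_key call in the trace, while the encoding itself is an assumption based on the TP-8006 interchange layout — base64 over the raw key bytes plus a 4-byte little-endian CRC32, framed as <prefix>:<digest>:<b64>: with digest 00 meaning no PSK hash:

# Hypothetical reconstruction of the python body hidden by xtrace; the CRC32
# trailer and framing are assumptions from the TP-8006 interchange format.
format_key_sketch() {
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
psk = bytes.fromhex(key)                     # raw key material
crc = zlib.crc32(psk).to_bytes(4, "little")  # integrity trailer
print(f"{prefix}:{digest:02x}:{base64.b64encode(psk + crc).decode()}:")
EOF
}

# e.g. format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
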
00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3rYCH0utdS 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3rYCH0utdS 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3rYCH0utdS 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3rYCH0utdS 00:39:00.984 06:37:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RilmiSP3Gm 00:39:00.984 06:37:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:00.984 06:37:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:00.985 06:37:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:00.985 06:37:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RilmiSP3Gm 00:39:00.985 06:37:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RilmiSP3Gm 00:39:00.985 06:37:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RilmiSP3Gm 00:39:00.985 06:37:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=637610 00:39:00.985 06:37:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 637610 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 637610 ']' 00:39:00.985 06:37:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.985 06:37:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.985 [2024-12-09 06:37:55.435888] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:39:00.985 [2024-12-09 06:37:55.435962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637610 ] 00:39:00.985 [2024-12-09 06:37:55.525031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.244 [2024-12-09 06:37:55.577152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:01.816 06:37:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.816 [2024-12-09 06:37:56.259478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.816 null0 00:39:01.816 [2024-12-09 06:37:56.291518] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:01.816 [2024-12-09 06:37:56.291956] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.816 06:37:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.816 [2024-12-09 06:37:56.319571] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:01.816 request: 00:39:01.816 { 00:39:01.816 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.816 "secure_channel": false, 00:39:01.816 "listen_address": { 00:39:01.816 "trtype": "tcp", 00:39:01.816 "traddr": "127.0.0.1", 00:39:01.816 "trsvcid": "4420" 00:39:01.816 }, 00:39:01.816 "method": "nvmf_subsystem_add_listener", 00:39:01.816 "req_id": 1 00:39:01.816 } 00:39:01.816 Got JSON-RPC error response 00:39:01.816 response: 00:39:01.816 { 00:39:01.816 "code": 
-32602, 00:39:01.816 "message": "Invalid parameters" 00:39:01.816 } 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:01.816 06:37:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=637773 00:39:01.816 06:37:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 637773 /var/tmp/bperf.sock 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 637773 ']' 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:01.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.816 06:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.816 06:37:56 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:01.816 [2024-12-09 06:37:56.378965] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:39:01.816 [2024-12-09 06:37:56.379040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637773 ] 00:39:02.077 [2024-12-09 06:37:56.452742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.077 [2024-12-09 06:37:56.503759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.647 06:37:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:02.647 06:37:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:02.647 06:37:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:02.647 06:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:02.908 06:37:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RilmiSP3Gm 00:39:02.908 06:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RilmiSP3Gm 00:39:03.169 06:37:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:03.169 06:37:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.169 
06:37:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3rYCH0utdS == \/\t\m\p\/\t\m\p\.\3\r\Y\C\H\0\u\t\d\S ]] 00:39:03.169 06:37:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:03.169 06:37:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.169 06:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:03.429 06:37:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.RilmiSP3Gm == \/\t\m\p\/\t\m\p\.\R\i\l\m\i\S\P\3\G\m ]] 00:39:03.429 06:37:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:03.429 06:37:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:03.429 06:37:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.429 06:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.429 06:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.429 06:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:03.689 06:37:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:03.689 06:37:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:03.689 06:37:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:03.689 06:37:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.689 06:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.689 06:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.689 06:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:03.950 06:37:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:03.950 06:37:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.950 06:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.950 [2024-12-09 06:37:58.483254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:04.209 nvme0n1 00:39:04.209 06:37:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.209 06:37:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:04.209 06:37:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:04.209 06:37:58 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:04.209 06:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.469 06:37:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:04.469 06:37:58 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:04.469 Running I/O for 1 seconds... 00:39:05.851 19612.00 IOPS, 76.61 MiB/s 00:39:05.851 Latency(us) 00:39:05.851 [2024-12-09T05:38:00.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.851 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:05.851 nvme0n1 : 1.00 19664.27 76.81 0.00 0.00 6498.06 3831.34 14518.74 00:39:05.851 [2024-12-09T05:38:00.438Z] =================================================================================================================== 00:39:05.851 [2024-12-09T05:38:00.438Z] Total : 19664.27 76.81 0.00 0.00 6498.06 3831.34 14518.74 00:39:05.851 { 00:39:05.851 "results": [ 00:39:05.851 { 00:39:05.851 "job": "nvme0n1", 00:39:05.851 "core_mask": "0x2", 00:39:05.851 "workload": "randrw", 00:39:05.852 "percentage": 50, 00:39:05.852 "status": "finished", 00:39:05.852 "queue_depth": 128, 00:39:05.852 "io_size": 4096, 00:39:05.852 "runtime": 1.003851, 00:39:05.852 "iops": 19664.272885119404, 00:39:05.852 "mibps": 76.81356595749767, 00:39:05.852 "io_failed": 0, 00:39:05.852 "io_timeout": 0, 00:39:05.852 "avg_latency_us": 6498.064739459122, 00:39:05.852 "min_latency_us": 3831.3353846153846, 00:39:05.852 "max_latency_us": 14518.744615384616 00:39:05.852 } 00:39:05.852 ], 00:39:05.852 "core_count": 1 00:39:05.852 } 00:39:05.852 06:38:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:05.852 06:38:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.852 06:38:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:05.852 06:38:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.852 06:38:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.112 06:38:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:06.112 06:38:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.112 06:38:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:06.112 06:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:06.372 [2024-12-09 06:38:00.724149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:06.372 [2024-12-09 06:38:00.724717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61490 (107): Transport endpoint is not connected 00:39:06.372 [2024-12-09 06:38:00.725713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61490 (9): Bad file descriptor 00:39:06.372 [2024-12-09 06:38:00.726714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:06.372 [2024-12-09 06:38:00.726726] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:06.372 [2024-12-09 06:38:00.726732] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:06.372 [2024-12-09 06:38:00.726739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
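
This attach attempt is a negative test: the listener was registered with key0, the initiator offers key1, and the mismatched PSKs take down the TLS handshake — hence the 'Transport endpoint is not connected' errors above and the JSON-RPC -5 request/response dump that follows. The NOT/valid_exec_arg wrapper driving it (the es= bookkeeping in the trace) simply inverts the exit status; a minimal sketch, with the signal-exit branch simplified from the real helper:

# Minimal sketch of NOT: succeed only when the wrapped command fails.
NOT() {
  local es=0
  "$@" || es=$?
  ((es > 128)) && return "$es"  # death by signal stays a hard failure (simplification)
  ((es != 0))                   # invert: command failure -> NOT success
}

# e.g. NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1  ->  exit 0
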
00:39:06.372 request: 00:39:06.372 { 00:39:06.372 "name": "nvme0", 00:39:06.372 "trtype": "tcp", 00:39:06.372 "traddr": "127.0.0.1", 00:39:06.372 "adrfam": "ipv4", 00:39:06.372 "trsvcid": "4420", 00:39:06.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.372 "prchk_reftag": false, 00:39:06.372 "prchk_guard": false, 00:39:06.372 "hdgst": false, 00:39:06.372 "ddgst": false, 00:39:06.372 "psk": "key1", 00:39:06.372 "allow_unrecognized_csi": false, 00:39:06.372 "method": "bdev_nvme_attach_controller", 00:39:06.372 "req_id": 1 00:39:06.372 } 00:39:06.372 Got JSON-RPC error response 00:39:06.372 response: 00:39:06.372 { 00:39:06.372 "code": -5, 00:39:06.372 "message": "Input/output error" 00:39:06.372 } 00:39:06.372 06:38:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:06.372 06:38:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.372 06:38:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:06.372 06:38:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.372 06:38:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.372 06:38:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:06.372 06:38:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.372 06:38:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.632 06:38:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:06.632 06:38:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:06.632 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:06.892 06:38:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:06.892 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:06.892 06:38:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:06.892 06:38:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:06.892 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.152 06:38:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:07.152 06:38:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3rYCH0utdS 00:39:07.152 06:38:01 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.152 06:38:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.152 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.412 [2024-12-09 06:38:01.779500] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3rYCH0utdS': 0100660 00:39:07.412 [2024-12-09 06:38:01.779521] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:07.412 request: 00:39:07.412 { 00:39:07.412 "name": "key0", 00:39:07.412 "path": "/tmp/tmp.3rYCH0utdS", 00:39:07.412 "method": "keyring_file_add_key", 00:39:07.412 "req_id": 1 00:39:07.412 } 00:39:07.412 Got JSON-RPC error response 00:39:07.412 response: 00:39:07.412 { 00:39:07.412 "code": -1, 00:39:07.412 "message": "Operation not permitted" 00:39:07.412 } 00:39:07.412 06:38:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:07.412 06:38:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:07.412 06:38:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:07.412 06:38:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:07.412 06:38:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3rYCH0utdS 00:39:07.412 06:38:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3rYCH0utdS 00:39:07.412 06:38:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3rYCH0utdS 00:39:07.412 06:38:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.412 06:38:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.701 06:38:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:07.701 06:38:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.701 06:38:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.701 06:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.701 [2024-12-09 06:38:02.284791] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3rYCH0utdS': No such file or directory 00:39:07.701 [2024-12-09 06:38:02.284804] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:07.701 [2024-12-09 06:38:02.284817] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:07.701 [2024-12-09 06:38:02.284823] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:07.701 [2024-12-09 06:38:02.284829] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:07.701 [2024-12-09 06:38:02.284834] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:07.960 request: 00:39:07.960 { 00:39:07.960 "name": "nvme0", 00:39:07.960 "trtype": "tcp", 00:39:07.960 "traddr": "127.0.0.1", 00:39:07.960 "adrfam": "ipv4", 00:39:07.960 "trsvcid": "4420", 00:39:07.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.960 "prchk_reftag": false, 00:39:07.960 "prchk_guard": false, 00:39:07.960 "hdgst": false, 00:39:07.960 "ddgst": false, 00:39:07.960 "psk": "key0", 00:39:07.960 "allow_unrecognized_csi": false, 00:39:07.960 "method": "bdev_nvme_attach_controller", 00:39:07.960 "req_id": 1 00:39:07.960 } 00:39:07.960 Got JSON-RPC error response 00:39:07.960 response: 00:39:07.960 { 00:39:07.960 "code": -19, 00:39:07.960 "message": "No such device" 00:39:07.960 } 00:39:07.960 06:38:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:07.960 06:38:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:07.960 06:38:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:07.960 06:38:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:07.960 06:38:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:07.960 06:38:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1jrwHgiAxd 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:07.960 06:38:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1jrwHgiAxd 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1jrwHgiAxd 00:39:07.960 06:38:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1jrwHgiAxd 00:39:07.960 06:38:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1jrwHgiAxd 00:39:07.960 06:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1jrwHgiAxd 00:39:08.220 06:38:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.220 06:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.480 nvme0n1 00:39:08.480 06:38:02 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:08.480 06:38:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:08.480 06:38:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:08.480 06:38:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.480 06:38:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.480 06:38:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.739 06:38:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:08.739 06:38:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:08.739 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:08.739 06:38:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:08.739 06:38:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:08.739 06:38:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.739 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:08.739 06:38:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.998 06:38:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:08.998 06:38:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:08.998 06:38:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:08.998 06:38:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:08.998 06:38:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.998 06:38:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.998 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.257 06:38:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:09.257 06:38:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:09.257 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:09.257 06:38:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:09.257 06:38:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:09.257 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.518 06:38:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:09.518 06:38:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1jrwHgiAxd 00:39:09.518 06:38:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1jrwHgiAxd 00:39:09.779 06:38:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RilmiSP3Gm 00:39:09.779 06:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RilmiSP3Gm 00:39:09.779 06:38:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.779 06:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:10.039 nvme0n1 00:39:10.039 06:38:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:10.039 06:38:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:10.318 06:38:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:10.318 "subsystems": [ 00:39:10.318 { 00:39:10.318 "subsystem": "keyring", 00:39:10.318 "config": [ 00:39:10.318 { 00:39:10.318 "method": "keyring_file_add_key", 00:39:10.318 "params": { 00:39:10.318 "name": "key0", 00:39:10.318 "path": "/tmp/tmp.1jrwHgiAxd" 00:39:10.318 } 00:39:10.318 }, 00:39:10.318 { 00:39:10.318 "method": "keyring_file_add_key", 00:39:10.318 "params": { 00:39:10.318 "name": "key1", 00:39:10.318 "path": "/tmp/tmp.RilmiSP3Gm" 00:39:10.318 } 00:39:10.318 } 00:39:10.318 ] 
00:39:10.318 }, 00:39:10.318 { 00:39:10.318 "subsystem": "iobuf", 00:39:10.318 "config": [ 00:39:10.318 { 00:39:10.318 "method": "iobuf_set_options", 00:39:10.318 "params": { 00:39:10.318 "small_pool_count": 8192, 00:39:10.318 "large_pool_count": 1024, 00:39:10.319 "small_bufsize": 8192, 00:39:10.319 "large_bufsize": 135168, 00:39:10.319 "enable_numa": false 00:39:10.319 } 00:39:10.319 } 00:39:10.319 ] 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "subsystem": "sock", 00:39:10.319 "config": [ 00:39:10.319 { 00:39:10.319 "method": "sock_set_default_impl", 00:39:10.319 "params": { 00:39:10.319 "impl_name": "posix" 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "sock_impl_set_options", 00:39:10.319 "params": { 00:39:10.319 "impl_name": "ssl", 00:39:10.319 "recv_buf_size": 4096, 00:39:10.319 "send_buf_size": 4096, 00:39:10.319 "enable_recv_pipe": true, 00:39:10.319 "enable_quickack": false, 00:39:10.319 "enable_placement_id": 0, 00:39:10.319 "enable_zerocopy_send_server": true, 00:39:10.319 "enable_zerocopy_send_client": false, 00:39:10.319 "zerocopy_threshold": 0, 00:39:10.319 "tls_version": 0, 00:39:10.319 "enable_ktls": false 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "sock_impl_set_options", 00:39:10.319 "params": { 00:39:10.319 "impl_name": "posix", 00:39:10.319 "recv_buf_size": 2097152, 00:39:10.319 "send_buf_size": 2097152, 00:39:10.319 "enable_recv_pipe": true, 00:39:10.319 "enable_quickack": false, 00:39:10.319 "enable_placement_id": 0, 00:39:10.319 "enable_zerocopy_send_server": true, 00:39:10.319 "enable_zerocopy_send_client": false, 00:39:10.319 "zerocopy_threshold": 0, 00:39:10.319 "tls_version": 0, 00:39:10.319 "enable_ktls": false 00:39:10.319 } 00:39:10.319 } 00:39:10.319 ] 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "subsystem": "vmd", 00:39:10.319 "config": [] 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "subsystem": "accel", 00:39:10.319 "config": [ 00:39:10.319 { 00:39:10.319 "method": "accel_set_options", 00:39:10.319 "params": { 00:39:10.319 "small_cache_size": 128, 00:39:10.319 "large_cache_size": 16, 00:39:10.319 "task_count": 2048, 00:39:10.319 "sequence_count": 2048, 00:39:10.319 "buf_count": 2048 00:39:10.319 } 00:39:10.319 } 00:39:10.319 ] 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "subsystem": "bdev", 00:39:10.319 "config": [ 00:39:10.319 { 00:39:10.319 "method": "bdev_set_options", 00:39:10.319 "params": { 00:39:10.319 "bdev_io_pool_size": 65535, 00:39:10.319 "bdev_io_cache_size": 256, 00:39:10.319 "bdev_auto_examine": true, 00:39:10.319 "iobuf_small_cache_size": 128, 00:39:10.319 "iobuf_large_cache_size": 16 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_raid_set_options", 00:39:10.319 "params": { 00:39:10.319 "process_window_size_kb": 1024, 00:39:10.319 "process_max_bandwidth_mb_sec": 0 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_iscsi_set_options", 00:39:10.319 "params": { 00:39:10.319 "timeout_sec": 30 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_nvme_set_options", 00:39:10.319 "params": { 00:39:10.319 "action_on_timeout": "none", 00:39:10.319 "timeout_us": 0, 00:39:10.319 "timeout_admin_us": 0, 00:39:10.319 "keep_alive_timeout_ms": 10000, 00:39:10.319 "arbitration_burst": 0, 00:39:10.319 "low_priority_weight": 0, 00:39:10.319 "medium_priority_weight": 0, 00:39:10.319 "high_priority_weight": 0, 00:39:10.319 "nvme_adminq_poll_period_us": 10000, 00:39:10.319 "nvme_ioq_poll_period_us": 0, 00:39:10.319 "io_queue_requests": 512, 
00:39:10.319 "delay_cmd_submit": true, 00:39:10.319 "transport_retry_count": 4, 00:39:10.319 "bdev_retry_count": 3, 00:39:10.319 "transport_ack_timeout": 0, 00:39:10.319 "ctrlr_loss_timeout_sec": 0, 00:39:10.319 "reconnect_delay_sec": 0, 00:39:10.319 "fast_io_fail_timeout_sec": 0, 00:39:10.319 "disable_auto_failback": false, 00:39:10.319 "generate_uuids": false, 00:39:10.319 "transport_tos": 0, 00:39:10.319 "nvme_error_stat": false, 00:39:10.319 "rdma_srq_size": 0, 00:39:10.319 "io_path_stat": false, 00:39:10.319 "allow_accel_sequence": false, 00:39:10.319 "rdma_max_cq_size": 0, 00:39:10.319 "rdma_cm_event_timeout_ms": 0, 00:39:10.319 "dhchap_digests": [ 00:39:10.319 "sha256", 00:39:10.319 "sha384", 00:39:10.319 "sha512" 00:39:10.319 ], 00:39:10.319 "dhchap_dhgroups": [ 00:39:10.319 "null", 00:39:10.319 "ffdhe2048", 00:39:10.319 "ffdhe3072", 00:39:10.319 "ffdhe4096", 00:39:10.319 "ffdhe6144", 00:39:10.319 "ffdhe8192" 00:39:10.319 ] 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_nvme_attach_controller", 00:39:10.319 "params": { 00:39:10.319 "name": "nvme0", 00:39:10.319 "trtype": "TCP", 00:39:10.319 "adrfam": "IPv4", 00:39:10.319 "traddr": "127.0.0.1", 00:39:10.319 "trsvcid": "4420", 00:39:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.319 "prchk_reftag": false, 00:39:10.319 "prchk_guard": false, 00:39:10.319 "ctrlr_loss_timeout_sec": 0, 00:39:10.319 "reconnect_delay_sec": 0, 00:39:10.319 "fast_io_fail_timeout_sec": 0, 00:39:10.319 "psk": "key0", 00:39:10.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.319 "hdgst": false, 00:39:10.319 "ddgst": false, 00:39:10.319 "multipath": "multipath" 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_nvme_set_hotplug", 00:39:10.319 "params": { 00:39:10.319 "period_us": 100000, 00:39:10.319 "enable": false 00:39:10.319 } 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "method": "bdev_wait_for_examine" 00:39:10.319 } 00:39:10.319 ] 00:39:10.319 }, 00:39:10.319 { 00:39:10.319 "subsystem": "nbd", 00:39:10.319 "config": [] 00:39:10.319 } 00:39:10.319 ] 00:39:10.319 }' 00:39:10.319 06:38:04 keyring_file -- keyring/file.sh@115 -- # killprocess 637773 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 637773 ']' 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 637773 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637773 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637773' 00:39:10.319 killing process with pid 637773 00:39:10.319 06:38:04 keyring_file -- common/autotest_common.sh@973 -- # kill 637773 00:39:10.319 Received shutdown signal, test time was about 1.000000 seconds 00:39:10.319 00:39:10.319 Latency(us) 00:39:10.319 [2024-12-09T05:38:04.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.319 [2024-12-09T05:38:04.906Z] =================================================================================================================== 00:39:10.319 [2024-12-09T05:38:04.906Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:10.319 
06:38:04 keyring_file -- common/autotest_common.sh@978 -- # wait 637773 00:39:10.581 06:38:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=639161 00:39:10.581 06:38:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 639161 /var/tmp/bperf.sock 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 639161 ']' 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:10.581 06:38:04 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:10.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:10.581 06:38:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:10.581 06:38:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:10.581 "subsystems": [ 00:39:10.581 { 00:39:10.581 "subsystem": "keyring", 00:39:10.581 "config": [ 00:39:10.581 { 00:39:10.581 "method": "keyring_file_add_key", 00:39:10.581 "params": { 00:39:10.581 "name": "key0", 00:39:10.581 "path": "/tmp/tmp.1jrwHgiAxd" 00:39:10.581 } 00:39:10.581 }, 00:39:10.581 { 00:39:10.581 "method": "keyring_file_add_key", 00:39:10.581 "params": { 00:39:10.581 "name": "key1", 00:39:10.581 "path": "/tmp/tmp.RilmiSP3Gm" 00:39:10.581 } 00:39:10.581 } 00:39:10.581 ] 00:39:10.581 }, 00:39:10.581 { 00:39:10.581 "subsystem": "iobuf", 00:39:10.581 "config": [ 00:39:10.581 { 00:39:10.581 "method": "iobuf_set_options", 00:39:10.581 "params": { 00:39:10.581 "small_pool_count": 8192, 00:39:10.582 "large_pool_count": 1024, 00:39:10.582 "small_bufsize": 8192, 00:39:10.582 "large_bufsize": 135168, 00:39:10.582 "enable_numa": false 00:39:10.582 } 00:39:10.582 } 00:39:10.582 ] 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "subsystem": "sock", 00:39:10.582 "config": [ 00:39:10.582 { 00:39:10.582 "method": "sock_set_default_impl", 00:39:10.582 "params": { 00:39:10.582 "impl_name": "posix" 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "sock_impl_set_options", 00:39:10.582 "params": { 00:39:10.582 "impl_name": "ssl", 00:39:10.582 "recv_buf_size": 4096, 00:39:10.582 "send_buf_size": 4096, 00:39:10.582 "enable_recv_pipe": true, 00:39:10.582 "enable_quickack": false, 00:39:10.582 "enable_placement_id": 0, 00:39:10.582 "enable_zerocopy_send_server": true, 00:39:10.582 "enable_zerocopy_send_client": false, 00:39:10.582 "zerocopy_threshold": 0, 00:39:10.582 "tls_version": 0, 00:39:10.582 "enable_ktls": false 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "sock_impl_set_options", 00:39:10.582 "params": { 00:39:10.582 "impl_name": "posix", 00:39:10.582 "recv_buf_size": 2097152, 00:39:10.582 "send_buf_size": 2097152, 00:39:10.582 "enable_recv_pipe": true, 00:39:10.582 "enable_quickack": false, 00:39:10.582 "enable_placement_id": 0, 00:39:10.582 "enable_zerocopy_send_server": true, 00:39:10.582 "enable_zerocopy_send_client": false, 00:39:10.582 "zerocopy_threshold": 0, 00:39:10.582 "tls_version": 0, 00:39:10.582 "enable_ktls": false 00:39:10.582 } 00:39:10.582 } 00:39:10.582 ] 00:39:10.582 }, 
00:39:10.582 { 00:39:10.582 "subsystem": "vmd", 00:39:10.582 "config": [] 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "subsystem": "accel", 00:39:10.582 "config": [ 00:39:10.582 { 00:39:10.582 "method": "accel_set_options", 00:39:10.582 "params": { 00:39:10.582 "small_cache_size": 128, 00:39:10.582 "large_cache_size": 16, 00:39:10.582 "task_count": 2048, 00:39:10.582 "sequence_count": 2048, 00:39:10.582 "buf_count": 2048 00:39:10.582 } 00:39:10.582 } 00:39:10.582 ] 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "subsystem": "bdev", 00:39:10.582 "config": [ 00:39:10.582 { 00:39:10.582 "method": "bdev_set_options", 00:39:10.582 "params": { 00:39:10.582 "bdev_io_pool_size": 65535, 00:39:10.582 "bdev_io_cache_size": 256, 00:39:10.582 "bdev_auto_examine": true, 00:39:10.582 "iobuf_small_cache_size": 128, 00:39:10.582 "iobuf_large_cache_size": 16 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_raid_set_options", 00:39:10.582 "params": { 00:39:10.582 "process_window_size_kb": 1024, 00:39:10.582 "process_max_bandwidth_mb_sec": 0 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_iscsi_set_options", 00:39:10.582 "params": { 00:39:10.582 "timeout_sec": 30 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_nvme_set_options", 00:39:10.582 "params": { 00:39:10.582 "action_on_timeout": "none", 00:39:10.582 "timeout_us": 0, 00:39:10.582 "timeout_admin_us": 0, 00:39:10.582 "keep_alive_timeout_ms": 10000, 00:39:10.582 "arbitration_burst": 0, 00:39:10.582 "low_priority_weight": 0, 00:39:10.582 "medium_priority_weight": 0, 00:39:10.582 "high_priority_weight": 0, 00:39:10.582 "nvme_adminq_poll_period_us": 10000, 00:39:10.582 "nvme_ioq_poll_period_us": 0, 00:39:10.582 "io_queue_requests": 512, 00:39:10.582 "delay_cmd_submit": true, 00:39:10.582 "transport_retry_count": 4, 00:39:10.582 "bdev_retry_count": 3, 00:39:10.582 "transport_ack_timeout": 0, 00:39:10.582 "ctrlr_loss_timeout_sec": 0, 00:39:10.582 "reconnect_delay_sec": 0, 00:39:10.582 "fast_io_fail_timeout_sec": 0, 00:39:10.582 "disable_auto_failback": false, 00:39:10.582 "generate_uuids": false, 00:39:10.582 "transport_tos": 0, 00:39:10.582 "nvme_error_stat": false, 00:39:10.582 "rdma_srq_size": 0, 00:39:10.582 "io_path_stat": false, 00:39:10.582 "allow_accel_sequence": false, 00:39:10.582 "rdma_max_cq_size": 0, 00:39:10.582 "rdma_cm_event_timeout_ms": 0, 00:39:10.582 "dhchap_digests": [ 00:39:10.582 "sha256", 00:39:10.582 "sha384", 00:39:10.582 "sha512" 00:39:10.582 ], 00:39:10.582 "dhchap_dhgroups": [ 00:39:10.582 "null", 00:39:10.582 "ffdhe2048", 00:39:10.582 "ffdhe3072", 00:39:10.582 "ffdhe4096", 00:39:10.582 "ffdhe6144", 00:39:10.582 "ffdhe8192" 00:39:10.582 ] 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_nvme_attach_controller", 00:39:10.582 "params": { 00:39:10.582 "name": "nvme0", 00:39:10.582 "trtype": "TCP", 00:39:10.582 "adrfam": "IPv4", 00:39:10.582 "traddr": "127.0.0.1", 00:39:10.582 "trsvcid": "4420", 00:39:10.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.582 "prchk_reftag": false, 00:39:10.582 "prchk_guard": false, 00:39:10.582 "ctrlr_loss_timeout_sec": 0, 00:39:10.582 "reconnect_delay_sec": 0, 00:39:10.582 "fast_io_fail_timeout_sec": 0, 00:39:10.582 "psk": "key0", 00:39:10.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.582 "hdgst": false, 00:39:10.582 "ddgst": false, 00:39:10.582 "multipath": "multipath" 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_nvme_set_hotplug", 00:39:10.582 "params": { 
00:39:10.582 "period_us": 100000, 00:39:10.582 "enable": false 00:39:10.582 } 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "method": "bdev_wait_for_examine" 00:39:10.582 } 00:39:10.582 ] 00:39:10.582 }, 00:39:10.582 { 00:39:10.582 "subsystem": "nbd", 00:39:10.582 "config": [] 00:39:10.582 } 00:39:10.582 ] 00:39:10.582 }' 00:39:10.582 [2024-12-09 06:38:04.974063] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:39:10.582 [2024-12-09 06:38:04.974114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639161 ] 00:39:10.582 [2024-12-09 06:38:05.032251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.582 [2024-12-09 06:38:05.062207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.843 [2024-12-09 06:38:05.205389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:11.413 06:38:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:11.413 06:38:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:11.413 06:38:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:11.413 06:38:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.413 06:38:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:11.413 06:38:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.413 06:38:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.673 06:38:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:11.673 06:38:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:11.673 06:38:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:11.673 06:38:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.673 06:38:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.673 06:38:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:11.673 06:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:11.933 06:38:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:11.933 06:38:06 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.1jrwHgiAxd /tmp/tmp.RilmiSP3Gm 00:39:11.933 06:38:06 keyring_file -- keyring/file.sh@20 -- # killprocess 639161 00:39:11.933 06:38:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 639161 ']' 00:39:11.933 06:38:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 639161 00:39:11.933 06:38:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:11.933 06:38:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.933 06:38:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639161 00:39:12.193 06:38:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639161' 00:39:12.194 killing process with pid 639161 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@973 -- # kill 639161 00:39:12.194 Received shutdown signal, test time was about 1.000000 seconds 00:39:12.194 00:39:12.194 Latency(us) 00:39:12.194 [2024-12-09T05:38:06.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.194 [2024-12-09T05:38:06.781Z] =================================================================================================================== 00:39:12.194 [2024-12-09T05:38:06.781Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@978 -- # wait 639161 00:39:12.194 06:38:06 keyring_file -- keyring/file.sh@21 -- # killprocess 637610 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 637610 ']' 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 637610 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637610 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637610' 00:39:12.194 killing process with pid 637610 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@973 -- # kill 637610 00:39:12.194 06:38:06 keyring_file -- common/autotest_common.sh@978 -- # wait 637610 00:39:12.454 00:39:12.454 real 0m11.866s 00:39:12.454 user 0m28.641s 00:39:12.454 sys 0m2.625s 00:39:12.454 06:38:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:12.454 06:38:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:12.454 ************************************ 00:39:12.454 END TEST keyring_file 00:39:12.454 ************************************ 00:39:12.454 06:38:06 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:12.454 06:38:06 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:12.454 06:38:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:12.454 06:38:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:12.454 06:38:06 -- 
common/autotest_common.sh@10 -- # set +x 00:39:12.454 ************************************ 00:39:12.454 START TEST keyring_linux 00:39:12.454 ************************************ 00:39:12.454 06:38:06 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:12.454 Joined session keyring: 817088419 00:39:12.715 * Looking for test storage... 00:39:12.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.715 06:38:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.715 --rc genhtml_branch_coverage=1 00:39:12.715 --rc genhtml_function_coverage=1 00:39:12.715 --rc genhtml_legend=1 00:39:12.715 --rc geninfo_all_blocks=1 00:39:12.715 --rc geninfo_unexecuted_blocks=1 00:39:12.715 00:39:12.715 ' 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.715 --rc genhtml_branch_coverage=1 00:39:12.715 --rc genhtml_function_coverage=1 00:39:12.715 --rc genhtml_legend=1 00:39:12.715 --rc geninfo_all_blocks=1 00:39:12.715 --rc geninfo_unexecuted_blocks=1 00:39:12.715 00:39:12.715 ' 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.715 --rc genhtml_branch_coverage=1 00:39:12.715 --rc genhtml_function_coverage=1 00:39:12.715 --rc genhtml_legend=1 00:39:12.715 --rc geninfo_all_blocks=1 00:39:12.715 --rc geninfo_unexecuted_blocks=1 00:39:12.715 00:39:12.715 ' 00:39:12.715 06:38:07 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.715 --rc genhtml_branch_coverage=1 00:39:12.715 --rc genhtml_function_coverage=1 00:39:12.715 --rc genhtml_legend=1 00:39:12.715 --rc geninfo_all_blocks=1 00:39:12.715 --rc geninfo_unexecuted_blocks=1 00:39:12.715 00:39:12.715 ' 00:39:12.715 06:38:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:12.715 06:38:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.715 06:38:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.716 06:38:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.716 06:38:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.716 06:38:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.716 06:38:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.716 06:38:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.716 06:38:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.716 06:38:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.716 06:38:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:12.716 06:38:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:12.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:12.716 06:38:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:12.716 06:38:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:12.716 06:38:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:12.976 /tmp/:spdk-test:key0 00:39:12.976 06:38:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:12.976 
06:38:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:12.976 06:38:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:12.976 06:38:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:12.976 /tmp/:spdk-test:key1 00:39:12.976 06:38:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=639814 00:39:12.976 06:38:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 639814 00:39:12.976 06:38:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 639814 ']' 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.976 06:38:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:12.976 [2024-12-09 06:38:07.410476] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:39:12.976 [2024-12-09 06:38:07.410531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639814 ] 00:39:12.976 [2024-12-09 06:38:07.493613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.976 [2024-12-09 06:38:07.526590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.916 06:38:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.916 06:38:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:13.916 06:38:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:13.916 06:38:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.916 06:38:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:13.917 [2024-12-09 06:38:08.212586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.917 null0 00:39:13.917 [2024-12-09 06:38:08.244631] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:13.917 [2024-12-09 06:38:08.244969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:13.917 37399013 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:13.917 65791867 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=639847 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 639847 /var/tmp/bperf.sock 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 639847 ']' 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:13.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:13.917 [2024-12-09 06:38:08.321790] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:39:13.917 [2024-12-09 06:38:08.321835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639847 ] 00:39:13.917 [2024-12-09 06:38:08.379947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.917 [2024-12-09 06:38:08.409853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.917 06:38:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:13.917 06:38:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:13.917 06:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:14.178 06:38:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:14.178 06:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:14.439 06:38:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:14.439 06:38:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:14.439 [2024-12-09 06:38:09.016385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:14.699 nvme0n1 00:39:14.699 06:38:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:14.699 06:38:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:14.699 06:38:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:14.699 06:38:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:14.699 06:38:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:14.699 06:38:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:14.960 06:38:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:14.960 06:38:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.960 06:38:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@25 -- # sn=37399013 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:14.960 06:38:09 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 37399013 == \3\7\3\9\9\0\1\3 ]] 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 37399013 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:14.960 06:38:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:15.220 Running I/O for 1 seconds... 00:39:16.158 23709.00 IOPS, 92.61 MiB/s 00:39:16.158 Latency(us) 00:39:16.158 [2024-12-09T05:38:10.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:16.158 nvme0n1 : 1.01 23709.96 92.62 0.00 0.00 5382.21 4486.70 14014.62 00:39:16.158 [2024-12-09T05:38:10.745Z] =================================================================================================================== 00:39:16.158 [2024-12-09T05:38:10.745Z] Total : 23709.96 92.62 0.00 0.00 5382.21 4486.70 14014.62 00:39:16.158 { 00:39:16.158 "results": [ 00:39:16.158 { 00:39:16.158 "job": "nvme0n1", 00:39:16.158 "core_mask": "0x2", 00:39:16.158 "workload": "randread", 00:39:16.158 "status": "finished", 00:39:16.158 "queue_depth": 128, 00:39:16.158 "io_size": 4096, 00:39:16.158 "runtime": 1.005358, 00:39:16.158 "iops": 23709.962023478205, 00:39:16.158 "mibps": 92.61703915421174, 00:39:16.158 "io_failed": 0, 00:39:16.158 "io_timeout": 0, 00:39:16.158 "avg_latency_us": 5382.20541072218, 00:39:16.158 "min_latency_us": 4486.695384615385, 00:39:16.158 "max_latency_us": 14014.621538461539 00:39:16.158 } 00:39:16.158 ], 00:39:16.158 "core_count": 1 00:39:16.158 } 00:39:16.158 06:38:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:16.158 06:38:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:16.421 06:38:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:16.421 06:38:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:16.421 06:38:10 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.421 06:38:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.681 [2024-12-09 06:38:11.123562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:16.681 [2024-12-09 06:38:11.124318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa06240 (107): Transport endpoint is not connected 00:39:16.681 [2024-12-09 06:38:11.125314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa06240 (9): Bad file descriptor 00:39:16.681 [2024-12-09 06:38:11.126315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:16.681 [2024-12-09 06:38:11.126324] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:16.681 [2024-12-09 06:38:11.126330] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:16.682 [2024-12-09 06:38:11.126337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:16.682 request: 00:39:16.682 { 00:39:16.682 "name": "nvme0", 00:39:16.682 "trtype": "tcp", 00:39:16.682 "traddr": "127.0.0.1", 00:39:16.682 "adrfam": "ipv4", 00:39:16.682 "trsvcid": "4420", 00:39:16.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.682 "prchk_reftag": false, 00:39:16.682 "prchk_guard": false, 00:39:16.682 "hdgst": false, 00:39:16.682 "ddgst": false, 00:39:16.682 "psk": ":spdk-test:key1", 00:39:16.682 "allow_unrecognized_csi": false, 00:39:16.682 "method": "bdev_nvme_attach_controller", 00:39:16.682 "req_id": 1 00:39:16.682 } 00:39:16.682 Got JSON-RPC error response 00:39:16.682 response: 00:39:16.682 { 00:39:16.682 "code": -5, 00:39:16.682 "message": "Input/output error" 00:39:16.682 } 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@33 -- # sn=37399013 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 37399013 00:39:16.682 1 links removed 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@33 -- # sn=65791867 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 65791867 00:39:16.682 1 links removed 00:39:16.682 06:38:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 639847 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 639847 ']' 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 639847 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639847 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639847' 00:39:16.682 killing process with pid 639847 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 639847 00:39:16.682 Received shutdown signal, test time was about 1.000000 seconds 00:39:16.682 00:39:16.682 Latency(us) 
00:39:16.682 [2024-12-09T05:38:11.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.682 [2024-12-09T05:38:11.269Z] =================================================================================================================== 00:39:16.682 [2024-12-09T05:38:11.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:16.682 06:38:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 639847 00:39:16.942 06:38:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 639814 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 639814 ']' 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 639814 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639814 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639814' 00:39:16.942 killing process with pid 639814 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 639814 00:39:16.942 06:38:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 639814 00:39:17.202 00:39:17.202 real 0m4.578s 00:39:17.202 user 0m8.284s 00:39:17.202 sys 0m1.396s 00:39:17.202 06:38:11 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:17.202 06:38:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:17.202 ************************************ 00:39:17.202 END TEST keyring_linux 00:39:17.202 ************************************ 00:39:17.202 06:38:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:17.202 06:38:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:17.202 06:38:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:17.202 06:38:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:17.202 06:38:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:17.202 06:38:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:17.202 06:38:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:17.202 06:38:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.202 06:38:11 -- common/autotest_common.sh@10 -- # set +x 00:39:17.202 06:38:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:17.202 06:38:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:17.202 06:38:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:17.202 06:38:11 -- common/autotest_common.sh@10 -- # set +x 00:39:25.388 INFO: APP EXITING 00:39:25.388 INFO: killing all 
VMs 00:39:25.388 INFO: killing vhost app 00:39:25.388 INFO: EXIT DONE 00:39:27.923 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:39:27.923 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:27.923 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:32.121 Cleaning 00:39:32.121 Removing: /var/run/dpdk/spdk0/config 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:32.121 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:32.121 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:32.121 Removing: /var/run/dpdk/spdk1/config 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:32.121 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:32.121 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:32.121 Removing: /var/run/dpdk/spdk2/config 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:32.121 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:32.121 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:32.121 Removing: /var/run/dpdk/spdk3/config 00:39:32.121 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:32.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:32.122 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:32.122 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:32.122 Removing: /var/run/dpdk/spdk4/config 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:32.122 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:32.122 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:32.122 Removing: /dev/shm/bdev_svc_trace.1 00:39:32.122 Removing: /dev/shm/nvmf_trace.0 00:39:32.122 Removing: /dev/shm/spdk_tgt_trace.pid111941 00:39:32.122 Removing: /var/run/dpdk/spdk0 00:39:32.122 Removing: /var/run/dpdk/spdk1 00:39:32.122 Removing: /var/run/dpdk/spdk2 00:39:32.122 Removing: /var/run/dpdk/spdk3 00:39:32.122 Removing: /var/run/dpdk/spdk4 00:39:32.122 Removing: /var/run/dpdk/spdk_pid108519 00:39:32.122 Removing: /var/run/dpdk/spdk_pid109736 00:39:32.122 Removing: /var/run/dpdk/spdk_pid111941 00:39:32.122 Removing: /var/run/dpdk/spdk_pid112467 00:39:32.122 Removing: /var/run/dpdk/spdk_pid113406 00:39:32.122 Removing: /var/run/dpdk/spdk_pid113697 00:39:32.122 Removing: /var/run/dpdk/spdk_pid114672 00:39:32.122 Removing: /var/run/dpdk/spdk_pid114935 00:39:32.122 Removing: /var/run/dpdk/spdk_pid115114 00:39:32.122 Removing: /var/run/dpdk/spdk_pid116791 00:39:32.122 Removing: /var/run/dpdk/spdk_pid118413 00:39:32.122 Removing: /var/run/dpdk/spdk_pid118774 00:39:32.122 Removing: /var/run/dpdk/spdk_pid118865 00:39:32.122 Removing: /var/run/dpdk/spdk_pid119236 00:39:32.122 Removing: /var/run/dpdk/spdk_pid119602 00:39:32.122 Removing: /var/run/dpdk/spdk_pid119923 00:39:32.122 Removing: /var/run/dpdk/spdk_pid119973 00:39:32.122 Removing: /var/run/dpdk/spdk_pid120319 00:39:32.122 Removing: /var/run/dpdk/spdk_pid121284 00:39:32.122 Removing: /var/run/dpdk/spdk_pid124271 00:39:32.122 Removing: /var/run/dpdk/spdk_pid124604 00:39:32.122 Removing: /var/run/dpdk/spdk_pid124938 00:39:32.122 Removing: /var/run/dpdk/spdk_pid124963 00:39:32.122 Removing: /var/run/dpdk/spdk_pid125329 00:39:32.122 Removing: /var/run/dpdk/spdk_pid125593 00:39:32.122 Removing: /var/run/dpdk/spdk_pid125939 00:39:32.122 Removing: /var/run/dpdk/spdk_pid126106 00:39:32.122 Removing: /var/run/dpdk/spdk_pid126281 00:39:32.122 Removing: /var/run/dpdk/spdk_pid126561 00:39:32.122 Removing: /var/run/dpdk/spdk_pid126626 00:39:32.122 Removing: /var/run/dpdk/spdk_pid126928 00:39:32.122 Removing: /var/run/dpdk/spdk_pid127341 00:39:32.122 Removing: /var/run/dpdk/spdk_pid127534 00:39:32.122 Removing: /var/run/dpdk/spdk_pid127767 00:39:32.122 Removing: /var/run/dpdk/spdk_pid132320 00:39:32.122 
Removing: /var/run/dpdk/spdk_pid137305 00:39:32.122 Removing: /var/run/dpdk/spdk_pid148368 00:39:32.122 Removing: /var/run/dpdk/spdk_pid149046 00:39:32.122 Removing: /var/run/dpdk/spdk_pid153905 00:39:32.122 Removing: /var/run/dpdk/spdk_pid154282 00:39:32.122 Removing: /var/run/dpdk/spdk_pid159012 00:39:32.122 Removing: /var/run/dpdk/spdk_pid165425 00:39:32.122 Removing: /var/run/dpdk/spdk_pid168200 00:39:32.122 Removing: /var/run/dpdk/spdk_pid179507 00:39:32.122 Removing: /var/run/dpdk/spdk_pid189597 00:39:32.122 Removing: /var/run/dpdk/spdk_pid191407 00:39:32.122 Removing: /var/run/dpdk/spdk_pid192333 00:39:32.122 Removing: /var/run/dpdk/spdk_pid211909 00:39:32.122 Removing: /var/run/dpdk/spdk_pid216434 00:39:32.122 Removing: /var/run/dpdk/spdk_pid268691 00:39:32.122 Removing: /var/run/dpdk/spdk_pid274771 00:39:32.122 Removing: /var/run/dpdk/spdk_pid281012 00:39:32.122 Removing: /var/run/dpdk/spdk_pid288152 00:39:32.122 Removing: /var/run/dpdk/spdk_pid288154 00:39:32.122 Removing: /var/run/dpdk/spdk_pid289063 00:39:32.122 Removing: /var/run/dpdk/spdk_pid289971 00:39:32.122 Removing: /var/run/dpdk/spdk_pid290877 00:39:32.122 Removing: /var/run/dpdk/spdk_pid291225 00:39:32.122 Removing: /var/run/dpdk/spdk_pid291371 00:39:32.122 Removing: /var/run/dpdk/spdk_pid291528 00:39:32.122 Removing: /var/run/dpdk/spdk_pid291812 00:39:32.122 Removing: /var/run/dpdk/spdk_pid291817 00:39:32.122 Removing: /var/run/dpdk/spdk_pid292840 00:39:32.382 Removing: /var/run/dpdk/spdk_pid293871 00:39:32.382 Removing: /var/run/dpdk/spdk_pid294965 00:39:32.382 Removing: /var/run/dpdk/spdk_pid295455 00:39:32.382 Removing: /var/run/dpdk/spdk_pid295525 00:39:32.382 Removing: /var/run/dpdk/spdk_pid295758 00:39:32.382 Removing: /var/run/dpdk/spdk_pid297061 00:39:32.382 Removing: /var/run/dpdk/spdk_pid298108 00:39:32.382 Removing: /var/run/dpdk/spdk_pid306976 00:39:32.382 Removing: /var/run/dpdk/spdk_pid337374 00:39:32.382 Removing: /var/run/dpdk/spdk_pid342373 00:39:32.382 Removing: /var/run/dpdk/spdk_pid343930 00:39:32.382 Removing: /var/run/dpdk/spdk_pid346014 00:39:32.382 Removing: /var/run/dpdk/spdk_pid346026 00:39:32.382 Removing: /var/run/dpdk/spdk_pid346199 00:39:32.382 Removing: /var/run/dpdk/spdk_pid346351 00:39:32.382 Removing: /var/run/dpdk/spdk_pid346702 00:39:32.382 Removing: /var/run/dpdk/spdk_pid348533 00:39:32.382 Removing: /var/run/dpdk/spdk_pid349234 00:39:32.382 Removing: /var/run/dpdk/spdk_pid349797 00:39:32.382 Removing: /var/run/dpdk/spdk_pid352017 00:39:32.382 Removing: /var/run/dpdk/spdk_pid352578 00:39:32.382 Removing: /var/run/dpdk/spdk_pid353072 00:39:32.382 Removing: /var/run/dpdk/spdk_pid357888 00:39:32.382 Removing: /var/run/dpdk/spdk_pid363954 00:39:32.382 Removing: /var/run/dpdk/spdk_pid363955 00:39:32.382 Removing: /var/run/dpdk/spdk_pid363956 00:39:32.382 Removing: /var/run/dpdk/spdk_pid368431 00:39:32.382 Removing: /var/run/dpdk/spdk_pid378699 00:39:32.382 Removing: /var/run/dpdk/spdk_pid382991 00:39:32.382 Removing: /var/run/dpdk/spdk_pid389536 00:39:32.382 Removing: /var/run/dpdk/spdk_pid390906 00:39:32.382 Removing: /var/run/dpdk/spdk_pid392323 00:39:32.382 Removing: /var/run/dpdk/spdk_pid393800 00:39:32.382 Removing: /var/run/dpdk/spdk_pid399094 00:39:32.382 Removing: /var/run/dpdk/spdk_pid403978 00:39:32.382 Removing: /var/run/dpdk/spdk_pid408671 00:39:32.382 Removing: /var/run/dpdk/spdk_pid417189 00:39:32.382 Removing: /var/run/dpdk/spdk_pid417210 00:39:32.382 Removing: /var/run/dpdk/spdk_pid422576 00:39:32.382 Removing: /var/run/dpdk/spdk_pid422782 00:39:32.382 Removing: 
/var/run/dpdk/spdk_pid423081 00:39:32.382 Removing: /var/run/dpdk/spdk_pid423830 00:39:32.382 Removing: /var/run/dpdk/spdk_pid423858 00:39:32.382 Removing: /var/run/dpdk/spdk_pid429045 00:39:32.382 Removing: /var/run/dpdk/spdk_pid429756 00:39:32.382 Removing: /var/run/dpdk/spdk_pid434727 00:39:32.382 Removing: /var/run/dpdk/spdk_pid437476 00:39:32.382 Removing: /var/run/dpdk/spdk_pid443248 00:39:32.382 Removing: /var/run/dpdk/spdk_pid449200 00:39:32.382 Removing: /var/run/dpdk/spdk_pid458458 00:39:32.382 Removing: /var/run/dpdk/spdk_pid466186 00:39:32.382 Removing: /var/run/dpdk/spdk_pid466188 00:39:32.382 Removing: /var/run/dpdk/spdk_pid488543 00:39:32.382 Removing: /var/run/dpdk/spdk_pid489169 00:39:32.382 Removing: /var/run/dpdk/spdk_pid489779 00:39:32.382 Removing: /var/run/dpdk/spdk_pid490379 00:39:32.382 Removing: /var/run/dpdk/spdk_pid491066 00:39:32.382 Removing: /var/run/dpdk/spdk_pid491683 00:39:32.382 Removing: /var/run/dpdk/spdk_pid492240 00:39:32.382 Removing: /var/run/dpdk/spdk_pid492642 00:39:32.642 Removing: /var/run/dpdk/spdk_pid497483 00:39:32.642 Removing: /var/run/dpdk/spdk_pid497736 00:39:32.642 Removing: /var/run/dpdk/spdk_pid504251 00:39:32.642 Removing: /var/run/dpdk/spdk_pid504507 00:39:32.642 Removing: /var/run/dpdk/spdk_pid510561 00:39:32.642 Removing: /var/run/dpdk/spdk_pid515222 00:39:32.642 Removing: /var/run/dpdk/spdk_pid526123 00:39:32.642 Removing: /var/run/dpdk/spdk_pid526726 00:39:32.642 Removing: /var/run/dpdk/spdk_pid531586 00:39:32.642 Removing: /var/run/dpdk/spdk_pid531908 00:39:32.642 Removing: /var/run/dpdk/spdk_pid536757 00:39:32.642 Removing: /var/run/dpdk/spdk_pid543095 00:39:32.642 Removing: /var/run/dpdk/spdk_pid545638 00:39:32.642 Removing: /var/run/dpdk/spdk_pid556642 00:39:32.642 Removing: /var/run/dpdk/spdk_pid567032 00:39:32.642 Removing: /var/run/dpdk/spdk_pid568748 00:39:32.642 Removing: /var/run/dpdk/spdk_pid569664 00:39:32.642 Removing: /var/run/dpdk/spdk_pid587683 00:39:32.642 Removing: /var/run/dpdk/spdk_pid592190 00:39:32.642 Removing: /var/run/dpdk/spdk_pid594951 00:39:32.642 Removing: /var/run/dpdk/spdk_pid603630 00:39:32.642 Removing: /var/run/dpdk/spdk_pid603642 00:39:32.642 Removing: /var/run/dpdk/spdk_pid609653 00:39:32.642 Removing: /var/run/dpdk/spdk_pid611758 00:39:32.642 Removing: /var/run/dpdk/spdk_pid614208 00:39:32.642 Removing: /var/run/dpdk/spdk_pid615282 00:39:32.642 Removing: /var/run/dpdk/spdk_pid617278 00:39:32.642 Removing: /var/run/dpdk/spdk_pid618573 00:39:32.642 Removing: /var/run/dpdk/spdk_pid628297 00:39:32.642 Removing: /var/run/dpdk/spdk_pid628808 00:39:32.642 Removing: /var/run/dpdk/spdk_pid629406 00:39:32.642 Removing: /var/run/dpdk/spdk_pid632201 00:39:32.642 Removing: /var/run/dpdk/spdk_pid632540 00:39:32.642 Removing: /var/run/dpdk/spdk_pid633126 00:39:32.642 Removing: /var/run/dpdk/spdk_pid637610 00:39:32.642 Removing: /var/run/dpdk/spdk_pid637773 00:39:32.642 Removing: /var/run/dpdk/spdk_pid639161 00:39:32.642 Removing: /var/run/dpdk/spdk_pid639814 00:39:32.642 Removing: /var/run/dpdk/spdk_pid639847 00:39:32.642 Clean 00:39:32.642 06:38:27 -- common/autotest_common.sh@1453 -- # return 0 00:39:32.642 06:38:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:32.642 06:38:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.642 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:39:32.902 06:38:27 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:39:32.902 06:38:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.902 06:38:27 -- common/autotest_common.sh@10 -- 
# set +x 00:39:32.902 06:38:27 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:32.902 06:38:27 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:32.902 06:38:27 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:32.902 06:38:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:32.902 06:38:27 -- spdk/autotest.sh@398 -- # hostname 00:39:32.902 06:38:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:33.162 geninfo: WARNING: invalid characters removed from testname! 00:39:59.734 06:38:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:00.305 06:38:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:02.214 06:38:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:04.122 06:38:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:05.521 06:38:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:07.429 06:39:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:08.811 06:39:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:08.811 06:39:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:08.811 06:39:03 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:08.811 06:39:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:08.811 06:39:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:08.811 06:39:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:08.811 + [[ -n 28194 ]] 00:40:08.811 + sudo kill 28194 00:40:09.083 [Pipeline] } 00:40:09.100 [Pipeline] // stage 00:40:09.105 [Pipeline] } 00:40:09.120 [Pipeline] // timeout 00:40:09.125 [Pipeline] } 00:40:09.139 [Pipeline] // catchError 00:40:09.144 [Pipeline] } 00:40:09.157 [Pipeline] // wrap 00:40:09.163 [Pipeline] } 00:40:09.178 [Pipeline] // catchError 00:40:09.188 [Pipeline] stage 00:40:09.191 [Pipeline] { (Epilogue) 00:40:09.208 [Pipeline] catchError 00:40:09.210 [Pipeline] { 00:40:09.226 [Pipeline] echo 00:40:09.228 Cleanup processes 00:40:09.236 [Pipeline] sh 00:40:09.534 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:09.535 652107 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:09.552 [Pipeline] sh 00:40:09.848 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:09.848 ++ grep -v 'sudo pgrep' 00:40:09.848 ++ awk '{print $1}' 00:40:09.848 + sudo kill -9 00:40:09.848 + true 00:40:09.863 [Pipeline] sh 00:40:10.160 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:22.407 [Pipeline] sh 00:40:22.697 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:22.697 Artifacts sizes are good 00:40:22.714 [Pipeline] archiveArtifacts 00:40:22.723 Archiving artifacts 00:40:23.228 [Pipeline] sh 00:40:23.606 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:23.682 [Pipeline] cleanWs 00:40:23.709 [WS-CLEANUP] Deleting project workspace... 00:40:23.709 [WS-CLEANUP] Deferred wipeout is used... 00:40:23.716 [WS-CLEANUP] done 00:40:23.718 [Pipeline] } 00:40:23.735 [Pipeline] // catchError 00:40:23.749 [Pipeline] sh 00:40:24.043 + logger -p user.info -t JENKINS-CI 00:40:24.054 [Pipeline] } 00:40:24.070 [Pipeline] // stage 00:40:24.076 [Pipeline] } 00:40:24.094 [Pipeline] // node 00:40:24.100 [Pipeline] End of Pipeline 00:40:24.136 Finished: SUCCESS
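For reference, the keyring_linux teardown above runs a killprocess helper whose xtrace is visible near the top of this section: it checks that the pid argument is non-empty, probes the process with kill -0, resolves its command name so a sudo wrapper is never signalled blindly, then kills and waits. The helper's real definition lives in SPDK's common/autotest_common.sh; the sketch below is reconstructed from the trace, not copied from the source, and simplifies the sudo branch to a plain refusal.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # reject an empty pid argument
    kill -0 "$pid" 2>/dev/null || return 1  # probe: bail out if already gone
    if [ "$(uname)" = Linux ]; then
        # Resolve the command name, as 'ps --no-headers -o comm=' does in the trace
        # (it resolved to reactor_0 for this run).
        local name
        name=$(ps --no-headers -o comm= "$pid")
        # The real helper special-cases sudo-wrapped processes; this
        # simplified sketch just refuses to signal them.
        [ "$name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    # 'wait' only succeeds for children of the current shell, which is the
    # case when the test scripts spawn the target themselves.
    kill "$pid" && wait "$pid"
}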
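Both the prologue and the epilogue sweep the workspace for stray test processes with the same pgrep pipeline before continuing. A minimal sketch of that pattern, assuming the workspace path used throughout this job:

# List anything still running out of the job's spdk checkout, drop the
# pgrep invocation itself, and force-kill the rest. '|| true' keeps the
# step from failing when the pid list is empty (kill -9 with no
# arguments exits non-zero, hence the '+ true' in the trace).
pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
    | grep -v 'sudo pgrep' \
    | awk '{print $1}')
# $pids is intentionally unquoted so several pids expand to separate arguments.
sudo kill -9 $pids || true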
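The coverage post-processing traced above follows a capture, merge, filter sequence: the counters produced during the run are captured, merged with the pre-test baseline, and then every path that is not SPDK's own code is stripped out. The sketch below reconstructs that sequence with the rc options and filter patterns copied from the trace; the loop is an illustrative condensation, not the literal autotest.sh code (which, for example, also passes --ignore-errors on the /usr/* pass).

#!/usr/bin/env bash
# Coverage post-processing as seen in the trace, condensed into a loop.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SRC/../output
# rc options copied verbatim from the lcov invocations in the trace.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"

# Capture the counters accumulated while the tests ran, tagged with the
# host name (spdk-cyp-06 in this run).
lcov $LCOV_OPTS -q -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Remove everything that is not SPDK's own code, filtering in place.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done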